Testing a file structure in Groovy (Spock)

How can I test a created file tree against an expected file tree in Groovy (Spock)?
Right now I'm using a Set in which I specify the paths I expect to get, and I collect the actual paths this way:
Set<String> getCreatedFilePaths(String root) {
    Set<String> createdFilePaths = new HashSet<>()
    new File(root).eachFileRecurse {
        createdFilePaths << it.absolutePath
    }
    return createdFilePaths
}
But the readability of the test isn't great.
Is it possible in Groovy to write the expected paths as a tree, and then compare them with the actual ones?
For example, expected:
region:
    usa:
        new_york.json
        california.json
    europe:
        spain.json
        italy.json
And the actual paths would be converted to the same kind of tree.

Not sure if you can do it with the built-in recursive methods. There certainly are powerful ones, but this is standard recursion code you can use:
import groovy.io.FileType

def path = new File("/Users/me/Downloads")

// eachFile(FileType.FILES) lists only plain files at this level;
// directories are handled by the recursive eachDir call below.
def printTree(File file, Integer level) {
    println " " * level + "${file.name}:"
    file.eachFile(FileType.FILES) {
        println " " * (level + 1) + it.name
    }
    file.eachDir {
        printTree(it, level + 1)
    }
}

printTree(path, 1)
That prints the format you describe.

You can either build your own parser or use Groovy's built-in JSON parser:
package de.scrum_master.stackoverflow
import groovy.json.JsonParserType
import groovy.json.JsonSlurper
import spock.lang.Specification
class FileRecursionTest extends Specification {
    def jsonDirectoryTree = """{
        com : {
            na : {
                tests : [
                    MyBaseIT.groovy
                ]
            },
            twg : {
                sample : {
                    model : [
                        PrimeNumberCalculatorSpec.groovy
                    ]
                }
            }
        },
        de : {
            scrum_master : {
                stackoverflow : [
                    AllowedPasswordsTest.groovy,
                    CarTest.groovy,
                    FileRecursionTest.groovy,
                    {
                        foo : [
                            LoginIT.groovy,
                            LoginModule.groovy,
                            LoginPage.groovy,
                            LoginValidationPage.groovy,
                            User.groovy
                        ]
                    },
                    LuceneTest.groovy
                ],
                testing : [
                    GebTestHelper.groovy,
                    RestartBrowserIT.groovy,
                    SampleGebIT.groovy
                ]
            }
        }
    }"""

    def "Parse directory tree JSON representation"() {
        given:
        def jsonSlurper = new JsonSlurper(type: JsonParserType.LAX)
        def rootDirectory = jsonSlurper.parseText(jsonDirectoryTree)

        expect:
        rootDirectory.de.scrum_master.stackoverflow.contains("CarTest.groovy")
        rootDirectory.com.twg.sample.model.contains("PrimeNumberCalculatorSpec.groovy")

        when:
        def fileList = objectGraphToFileList("src/test/groovy", rootDirectory)
        fileList.each { println it }

        then:
        fileList.size() == 14
        fileList.contains("src/test/groovy/de/scrum_master/stackoverflow/CarTest.groovy")
        fileList.contains("src/test/groovy/com/twg/sample/model/PrimeNumberCalculatorSpec.groovy")
    }

    List<String> objectGraphToFileList(String directoryPath, Object directoryContent) {
        List<String> files = []
        directoryContent.each {
            switch (it) {
                case String:
                    files << directoryPath + "/" + it
                    break
                case Map:
                    files += objectGraphToFileList(directoryPath, it)
                    break
                case Map.Entry:
                    files += objectGraphToFileList(directoryPath + "/" + (it as Map.Entry).key, (it as Map.Entry).value)
                    break
                default:
                    throw new IllegalArgumentException("unexpected directory content value $it")
            }
        }
        files
    }
}
Please note:
I used new JsonSlurper(type: JsonParserType.LAX) in order to avoid having to quote each single String in the JSON structure. If your file names contain spaces or other special characters, you will have to use something like "my file name", though.
In rootDirectory.de.scrum_master.stackoverflow.contains("CarTest.groovy") you can see how you can nicely interact with the parsed JSON object graph in .property syntax. You might like it or not, need it or not.
Recursive method objectGraphToFileList converts the parsed object graph to a list of files. (If you prefer a set, change it, but File.eachFileRecurse(..) should not yield any duplicates, so a set is not needed.)
If you do not like the parentheses etc. in the JSON, you can still build your own parser.
You might want to add another utility method which creates a JSON string like the given one from a verified directory structure, so you have less work when writing similar tests; a sketch follows below.
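For instance, a generator along these lines could emit the tree format consumed above. This is a minimal sketch, not part of the tested code; the helper name directoryToJson is hypothetical, and it assumes file and directory names need no quoting (the LAX parser requirement mentioned above):

// Hypothetical helper: renders a directory as the LAX-JSON tree format
// consumed by objectGraphToFileList. Directories become { name : [...] }
// objects, plain files become bare list entries.
String directoryToJson(File dir, int level = 1) {
    String indent = "  " * level
    def entries = dir.listFiles().sort { it.name }.collect {
        it.isDirectory() ? "${indent}{ ${it.name} : ${directoryToJson(it, level + 1)} }"
                         : "${indent}${it.name}"
    }
    "[\n" + entries.join(",\n") + "\n" + "  " * (level - 1) + "]"
}

Calling directoryToJson(new File("src/test/groovy")) on a verified structure would then produce a string you can paste into a similar test.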

I modified Bavo Bruylandt's answer to collect the file tree paths, and sorted them so that the order of files does not matter.
def "check directory structure"() {
expect:
String created = getCreatedFilePaths(new File("/tmp/region"))
String expected = new File("expected.txt").text
created == expected
}
private String getCreatedFilePaths(File root) {
List paths = new ArrayList()
printTree(root, 0, paths)
return paths.join("\n")
}
private void printTree(File file, Integer level, List paths) {
paths << ("\t" * level + "${file.name}:")
file.listFiles().sort{it.name}.each {
if (it.isFile()) {
paths << ("\t" * (level + 1) + it.name)
}
if (it.isDirectory()) {
collectFileTree(it, level + 1, paths)
}
}
}
And the expected files are put in the expected.txt file, indented with tabs (\t), in this way:
region:
    usa:
        new_york.json
        california.json
    europe:
        spain.json
        italy.json

Related

How should I bundle a library of text files with my module?

I have the following structure in the resources directory in a module I'm building:
resources
|-- examples
    |-- Arrays
    |   |-- file
    |-- Lists
        |-- file1
        |-- file2
I have the following code to collect and process these files:
use v6.d;
unit module Doc::Examples::Resources;
class Resource {
    has Str $.name;
    has Resource @.resources;
    has Resource %.resource-index;

    method resource-names() {
        @.resources>>.name.sort
    }

    method list-resources() {
        self.resource-names>>.say;
    }

    method is-resource(Str:D $lesson) {
        $lesson ~~ any self.resource-names;
    }

    method get-resource(Str:D $lesson) {
        if !self.is-resource($lesson) {
            say "Sorry, that lesson does not exist.";
            return;
        }
        return %.resource-index{$lesson};
    }
}

class Lesson is Resource {
    use Doc::Parser;
    use Doc::Subroutines;
    has IO $.file;

    method new(IO:D :$file) {
        my $name = $file.basename;
        self.bless(:$name, :$file)
    }

    method parse() {
        my @parsed = parse-file $.file.path;
        die "Failed parse examples from $.file" if @parsed.^name eq 'Any';
        for @parsed -> $section {
            my $heading = $section<meta>[0] || '';
            my $intro = $section<meta>[1] || '';
            say $heading.uc ~ "\n" if $heading && !$intro;
            say $heading.uc if $heading && $intro;
            say $intro ~ "\n" if $intro;
            for $section<code>.Array {
                die "Failed parse examples from $.file, check its syntax." if $_.^name eq 'Any';
                das |$_>>.trim;
            }
        }
    }
}

class Topic is Resource {
    method new(IO:D :$dir) {
        my $files = dir $?DISTRIBUTION.content("$dir");
        my @lessons;
        my $name = $dir.basename;
        my %lesson-index;
        for $files.Array -> $file {
            my $lesson = Lesson.new(:$file);
            push @lessons, $lesson;
            %lesson-index{$lesson.name} = $lesson;
        }
        self.bless(:$name, resources => @lessons, resource-index => %lesson-index);
    }
}

class LocalResources is Resource is export {
    method new() {
        my $dirs = dir $?DISTRIBUTION.content('resources/examples');
        my @resources;
        my %resource-index;
        for $dirs.Array -> $dir {
            my $t = Topic.new(:$dir);
            push @resources, $t;
            %resource-index{$t.name} = $t;
        }
        self.bless(:@resources, :%resource-index)
    }

    method list-lessons(Str:D $topic) {
        self.get-resource($topic).list-lessons;
    }

    method parse-lesson(Str:D $topic, Str:D $lesson) {
        self.get-resource($topic).get-resource($lesson).parse;
    }
}
It works. However, I'm told that this is not reliable and that there is no guarantee that lines like my $files = dir $?DISTRIBUTION.content("$dir"); will work after the module is installed, or will continue to work in the future.
So what are better options for bundling a library of text files with my module that can be accessed and found by the module?
Files under the resources directory will always be available as keys to the %?RESOURCES compile-time variable if you declare them in the META6.json file this way:
"resources": [
"examples/Array/file",
]
and so on.
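For example (a minimal sketch, assuming the paths above are declared in META6.json; the sub name is hypothetical), the declared files can then be read through %?RESOURCES from inside the module's own code:

unit module Doc::Examples::Resources;

# %?RESOURCES keys mirror the paths declared under "resources" in META6.json;
# the lookup works both from a source checkout and after installation.
sub example-file-text(--> Str) is export {
    %?RESOURCES<examples/Arrays/file>.slurp;
}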
I've settled on a solution. As pointed out by jjmerelo, the META6.json file contains a list of resources and, if you use the Comma IDE, the list of resources is automatically generated for you.
From within the module's code, the list of resources can be accessed via the $?DISTRIBUTION variable like so:
my @resources = $?DISTRIBUTION.meta<resources>;
From here, I can build up my list of resources.
One note on something I discovered: the $?DISTRIBUTION variable is not accessible from a test script. It has to be placed inside a module in the lib directory of the distribution and exported.
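A minimal sketch of that workaround (module and sub names are hypothetical): wrap the lookup in a module under lib and export it, then call it from the test script:

# lib/Doc/Examples/ResourceList.rakumod
unit module Doc::Examples::ResourceList;

# $?DISTRIBUTION is only visible inside the distribution's own modules,
# so the test script must call this exported wrapper instead.
sub resource-paths is export {
    $?DISTRIBUTION.meta<resources>.list;
}

A test script can then use Doc::Examples::ResourceList; and call resource-paths().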

Terraform 0.12: Output list of buckets, use as input for another module and iterate

I'm using TF 0.12. I have an S3 module that outputs a list of buckets that I would like to use as an input for a CloudFront module I've got.
The problem I'm facing is that when I do terraform plan/apply I get the following error: count.index is 0 |var.redirect-buckets is tuple with 1 element
I've tried all kinds of splats and moved the count.index call around, to no avail. My sample code is below.
module.s3
resource "aws_s3_bucket" "redirect" {
count = length(var.redirects)
bucket = element(var.redirects, count.index)
}
module.s3.output
output "redirect-buckets" {
  value = [aws_s3_bucket.redirect.*]
}
module.cdn.variables
...
variable "redirect-buckets" {
description = "Redirect buckets"
default = []
}
....
The error is thrown down here
module.cdn
resource "aws_cloudfront_distribution" "redirect" {
count = length(var.redirect-buckets)
default_cache_behavior {
// Line below throws the error, one amongst many
target_origin_id = "cloudfront-distribution-origin-${var.redirect-buckets[count.index]}.s3.amazonaws.com"
....
//Another error throwing line
target_origin_id = "cloudfront-distribution-origin-${var.redirect-buckets[count.index]}.s3.amazonaws.com"
Any help is greatly appreciated.
module.s3
resource "aws_s3_bucket" "redirects" {
for_each = var.redirects
bucket = each.value
}
Your variable definition for redirects needs to change to something like this:
variable "redirects" {
type = map(string)
}
module.s3.output:
output "redirect_buckets" {
value = aws_s3_bucket.redirects
}
module.cdn
resource "aws_cloudfront_distribution" "redirects" {
for_each = var.redirect_buckets
default_cache_behavior {
target_origin_id = "cloudfront-distribution-origin-${each.value.id}.s3.amazonaws.com"
}
Your variable definition for redirect-buckets needs to change to something like this (note the underscores; using kebab-case in Terraform behaves strangely in some cases and is not worth it):
variable "redirect_buckets" {
type = map(object(
{
id = string
}
))
}
root module
module "s3" {
source = "../s3" // or whatever the path is
redirects = {
site1 = "some-bucket-name"
site2 = "some-other-bucket"
}
}
module "cdn" {
source = "../cdn" // or whatever the path is
redirects_buckets = module.s3.redirect_buckets
}
From an example perspective, this is interesting, but you don't need to use outputs from S3 here since you could just hand the cdn module the same map of redirects and use for_each on those.
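A minimal sketch of that simplification (same hypothetical module paths as above): the root module hands the identical map to both modules, and the cdn module runs its own for_each over it instead of consuming the S3 output:

variable "redirects" {
  type = map(string)
}

module "s3" {
  source    = "../s3" // hypothetical path, as above
  redirects = var.redirects
}

module "cdn" {
  source    = "../cdn" // hypothetical path, as above
  redirects = var.redirects
}

Inside the cdn module, the distribution would then use for_each = var.redirects and derive the origin from each.value directly. This only works if the bucket names alone carry enough information for the CDN; if you need other bucket attributes, passing the resource map through an output, as shown above, is the better route.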
There is a tool called Terragrunt which wraps Terraform and supports dependencies.
https://terragrunt.gruntwork.io/docs/features/execute-terraform-commands-on-multiple-modules-at-once/#dependencies-between-modules

Correct usage of SetRootType in FlatBuffers

I'm trying to set a new custom root before parsing JSON into a structure via FlatBuffers.
The corresponding .fbs schema already has a root_type, and I want to override it only to be able to parse the JSON into a struct once.
The SetRootType("NonRootStructInFbsT") call fails.
The documentation of the API says, this can be used to override the current root which is exactly what I want to do.
std::string schemaText;
std::string schemaFile("MySchema.fbs");
if(not flatbuffers::FileExists(schemaFile.c_str())) {
    error("Schema file inaccessible: ", schemaFile);
    return nullptr;
}
if(not flatbuffers::LoadFile(schemaFile.c_str(), false, &schemaText)) {
    error(TAG, "Failed to load schema file: ", schemaFile);
    return nullptr;
}
info("Read schema file: ", schemaText.size(), schemaText);
flatbuffers::Parser parser;
if(not parser.SetRootType("NonRootStructInFbsT")) {
    error("Unable to set root type: ", customRoot);
    return nullptr;
}
info("Set the root type: ", customRoot);
I always get the error message
Unable to set root type: NonRootStructInFbsT
The root of a FlatBuffer can only be a table, so root_type and SetRootType will reject the names of anything else, like a struct or a union.
Furthermore, the fact that the name ends in T suggests it refers to an "object API" type. Those names exist purely in the generated code; you need to supply names as they appear in the schema.
The posted code lacks a call to flatbuffers::Parser::Parse with the schema contents. Your code:
std::string schemaText;
std::string schemaFile("MySchema.fbs");
if(not flatbuffers::FileExists(schemaFile.c_str())) {
    error("Schema file inaccessible: ", schemaFile);
    return nullptr;
}
if(not flatbuffers::LoadFile(schemaFile.c_str(), false, &schemaText)) {
    error(TAG, "Failed to load schema file: ", schemaFile);
    return nullptr;
}
info("Read schema file: ", schemaText.size(), schemaText);
flatbuffers::Parser parser;
That is all OK, but at this point we only have the contents of the schema file in schemaText and an empty parser with default options; it is too early to set a root type. We should first read the schema into the parser with something like:
if(not parser.Parse(schemaText.c_str())) {
    error(TAG, "Defective schema: ", parser.error_);
    return nullptr;
}
If we reach this point, the parser has the schema loaded, and within that schema we can also choose the root type:
if(not parser.SetRootType("NonRootStructInFbsT")) {
    error("Unable to set root type: ", customRoot);
    return nullptr;
}
info("Set the root type: ", customRoot);
Note that parser.error_ is quite informative about any errors you face during parser usage.
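To complete the picture, a minimal sketch of the remaining step (the payload string and field name are hypothetical, and the root type must name a table, as the other answer explains): the same Parse method accepts the JSON text once the schema is loaded and the root type set, leaving the binary buffer in parser.builder_:

// After parser.Parse(schemaText.c_str()) and parser.SetRootType(...) succeed:
std::string jsonText = R"({ "some_field": 42 })";  // hypothetical payload
if(not parser.Parse(jsonText.c_str())) {
    error(TAG, "Failed to parse JSON: ", parser.error_);
    return nullptr;
}
// The serialized FlatBuffer is now in the parser's builder.
const uint8_t* buffer = parser.builder_.GetBufferPointer();
const size_t size = parser.builder_.GetSize();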

Perl6: How to find all installed modules whose filename matches a pattern?

Is it possible in Perl6 to find all installed modules whose file-name matches a pattern?
In Perl5 I would write it like this:
use File::Spec::Functions qw( catfile );

my %installed;
for my $dir ( @INC ) {
    my $glob_pattern = catfile $dir, 'App', 'DBBrowser', 'DB', '*.pm';
    map { $installed{$_}++ } glob $glob_pattern;
}
There is currently no way to get the original file name of an installed module. However, it is possible to get the module names:
sub list-installed {
    my @curs       = $*REPO.repo-chain.grep(*.?prefix.?e);
    my @repo-dirs  = @curs>>.prefix;
    my @dist-dirs  = |@repo-dirs.map(*.child('dist')).grep(*.e);
    my @dist-files = |@dist-dirs.map(*.IO.dir.grep(*.IO.f).Slip);

    my $dists := gather for @dist-files -> $file {
        if try { Distribution.new( |%(from-json($file.IO.slurp)) ) } -> $dist {
            my $cur = @curs.first: {.prefix eq $file.parent.parent}
            take $_ for $dist.hash<provides>.keys;
        }
    }
}

.say for list-installed();
see: Zef::Client.list-installed()
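To get the pattern-matching part of the question (a minimal sketch building on the sub above), you can filter the returned module names with a regex, analogous to the Perl 5 glob:

# Keep only modules in the App::DBBrowser::DB:: namespace (cf. the Perl 5 glob).
.say for list-installed().grep(/^ 'App::DBBrowser::DB::' /);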

Efficient Way to do batch import XMI in Enterprise Architect

Our team are using Enterprise Architect version 10 and SVN for the repository.
Because the EAP file size is quite big (e.g. 80 MB), we export each package into a separate XMI file and store it in SVN. The EAP file itself is committed after some milestone. The problem is that, to synchronize the EAP file with co-workers' work during development, we need to import lots of XMI files (e.g. 500 files in total).
I know that once the EAP file is updated, we can use Package Control -> Get All Latest. Therefore this problem occurs only during parallel development.
We have used keyboard shortcuts to do the import as follows:
1. Ctrl+Alt+I (Import package from XMI file)
2. Select the file name to import
3. Alt+I (Import)
4. Enter (Yes)
5. Repeat steps 2 to 4 until the module is finished
But still, importing hundreds of file is inefficient.
I've checked that Package Control has Batch Import/Export. Batch import/export works when I explicitly hard-code the XMI filename, but the options are not available when using version control (the batch import/export options are greyed out).
Is there any better ways to synchronize EAP and XMI files?
There is a scripting interface in EA. You might be able to automate the import using that. I've not used it, but it's probably quite good.
I'm not sure I fully understand your working environment, but I have some general points that may be of interest. If you use EA in a different way (especially my first point below), the need to batch import might go away.
Multiworker
First, multiple people can work on the same EAP file at a time. The EAP file is nothing more than an Access database file, and EA uses locking to stop multiple people editing the same package at the same time. But you can comfortably have multiple people editing different packages in one EAP file at the same time. Putting the EAP file on a file share somewhere is a good way of doing it.
Inbuilt Revision Control
Secondly, EA can interact directly with SVN (and other revision control systems). See this. In short, you can setup your EAP file so that individual packages (and everything below them) is SVN controlled. You can then check out an individual package, edit it, check it back in. Or indeed you can check out the whole branch below a package (including sub packages that are themselves SVN controlled).
Underneath the hood EA is importing and exporting XMI files and checking them in and out of SVN, whilst the EAP file is always the head revision. Just like what you're doing by hand, but automated. It makes sense given that you can all use the one single EAP file. You do have to be a bit careful rolling back - links originating from objects in older versions of one package might be pointing at objects that no longer exist (but you can look at the import log errors to see if this is the case). It takes a bit of getting used to, but it works pretty well.
There's also the built-in package baselining functionality - that might be all you need anyway, and it works quite well, especially if you're all using the same EAP file.
Bigger Database Engine
Thirdly, you don't have to have an EAP file at all. The model's database can be in any suitable database system (MySQL, SQL Server, Oracle, etc.), which gives you all sorts of options for scaling up how it's used, what it's like over a WAN/Internet, etc.
In short, Sparx have been quite sensible about how EA can be used in a multi-worker environment, and it's worth exploiting that.
I have created EA scripts using JScript for the automation.
Here is the script to do the export:
!INC Local Scripts.EAConstants-JScript
/*
 * Script Name : Export List of SVN Packages
 * Author      : SDK
 * Purpose     : Export a package and all of its subpackages information related to version
 *               control. The exported file then can be used to automatically import
 *               the XMIs
 * Date        : 30 July 2013
 * HOW TO USE  : 1. Select the package that you would like to export in the Project Browser
 *               2. Change the output filepath in this script if necessary.
 *                  By default it is "D:\\EAOutput.txt"
 *               3. Send the output file to your colleague who wanted to import the XMIs
 */
var f;

function main()
{
    // UPDATE THE FOLLOWING OUTPUT FILE PATH IF NECESSARY
    var filename = "D:\\EAOutput.txt";
    var ForReading = 1, ForWriting = 2, ForAppending = 8;

    Repository.EnsureOutputVisible( "Script" );
    Repository.ClearOutput( "Script" );
    Session.Output("Start generating output...please wait...");

    var treeSelectedType = Repository.GetTreeSelectedItemType();
    switch ( treeSelectedType )
    {
        case otPackage:
        {
            var fso = new ActiveXObject("Scripting.FileSystemObject");
            f = fso.OpenTextFile(filename, ForWriting, true);

            var selectedObject as EA.Package;
            selectedObject = Repository.GetContextObject();

            reportPackage(selectedObject);
            loopChildPackages(selectedObject);

            f.Close();
            Session.Output( "Done! Check your output at " + filename);
            break;
        }
        default:
        {
            Session.Prompt( "This script does not support items of this type.", promptOK );
        }
    }
}

function loopChildPackages(thePackage)
{
    for (var j = 0 ; j < thePackage.Packages.Count; j++)
    {
        var child as EA.Package;
        child = thePackage.Packages.GetAt(j);
        reportPackage(child);
        loopChildPackages(child);
    }
}

function getParentPath(childPackage)
{
    if (childPackage.ParentID != 0)
    {
        var parentPackage as EA.Package;
        parentPackage = Repository.GetPackageByID(childPackage.ParentID);
        return getParentPath(parentPackage) + "/" + parentPackage.Name;
    }
    return "";
}

function reportPackage(thePackage)
{
    f.WriteLine("GUID=" + thePackage.PackageGUID + ";"
        + "NAME=" + thePackage.Name + ";"
        + "VCCFG=" + getVCCFG(thePackage) + ";"
        + "XML=" + thePackage.XMLPath + ";"
        + "PARENT=" + getParentPath(thePackage).substring(1) + ";"
    );
}

function getVCCFG(thePackage)
{
    if (thePackage.IsVersionControlled)
    {
        var array = (thePackage.Flags).split(";");
        for (var z = 0 ; z < array.length; z++)
        {
            var pos = array[z].indexOf('=');
            if (pos > 0)
            {
                var key = array[z].substring(0, pos);
                var value = array[z].substring(pos + 1);
                if (key == "VCCFG")
                {
                    return (value);
                }
            }
        }
    }
    return "";
}

main();
And the script to do the import:
!INC Local Scripts.EAConstants-JScript
/*
 * Script Name : Import List Of SVN Packages
 * Author      : SDK
 * Purpose     : Imports a package with all of its sub packages generated from
 *               the "Export List Of SVN Packages" script
 * Date        : 01 Aug 2013
 * HOW TO USE  : 1. Get the output file generated by the "Export List Of SVN Packages" script
 *                  from your colleague
 *               2. Get the XMIs in the SVN local copy
 *               3. Change the path to the output file in this script if necessary (var filename).
 *                  By default it is "D:\\EAOutput.txt"
 *               4. Change the path to the local SVN copy
 *               5. Run the script
 */
var f;
var svnPath;

function main()
{
    // CHANGE THE FOLLOWING TWO LINES ACCORDING TO YOUR INPUT AND LOCAL SVN COPY
    var filename = "D:\\EAOutput.txt";
    svnPath = "D:\\svn.xxx.com\\yyy\\docs\\design\\";

    var ForReading = 1, ForWriting = 2, ForAppending = 8;

    Repository.EnsureOutputVisible( "Script" );
    Repository.ClearOutput( "Script" );
    Session.Output("[INFO] Start importing packages from " + filename + ". Please wait...");

    var fso = new ActiveXObject("Scripting.FileSystemObject");
    f = fso.OpenTextFile(filename, ForReading);

    // Read from the file and display the results.
    while (!f.AtEndOfStream)
    {
        var r = f.ReadLine();
        parseLine(r);
        Session.Output("--------------------------------------------------------------------------------");
    }
    f.Close();
    Session.Output("[INFO] Finished");
}

function parseLine(line)
{
    Session.Output("[INFO] Parsing " + line);

    var array = (line).split(";");
    var guid;
    var name;
    var isVersionControlled = false;
    var xmlPath = "";
    var parentPath;

    for (var z = 0 ; z < array.length; z++)
    {
        var pos = array[z].indexOf('=');
        if (pos > 0)
        {
            var key = array[z].substring(0, pos);
            var value = array[z].substring(pos + 1);
            if (key == "GUID") {
                guid = value;
            } else if (key == "NAME") {
                name = value;
            } else if (key == "VCCFG") {
                if (value != "") {
                    isVersionControlled = true;
                }
            } else if (key == "XML") {
                if (isVersionControlled) {
                    xmlPath = value;
                }
            } else if (key == "PARENT") {
                parentPath = value;
            }
        }
    }

    // Quick check for target if already exist to speed up process
    var targetPackage as EA.Package;
    targetPackage = Repository.GetPackageByGuid(guid);
    if (targetPackage != null)
    {
        // target exists, do not do anything
        Session.Output("[DEBUG] Target package \"" + name + "\" already exists");
        return;
    }

    // JScript arrays expose .length, not .Count; split first, then size the helper array
    var paths = (parentPath).split("/");
    var packages = new Array(paths.length);
    for (var i = 0; i < paths.length; i++)
    {
        packages[i] = null;
    }

    if (paths.length < 2)
    {
        Session.Output("[INFO] Skipped root or level1");
        return;
    }

    packages[0] = selectRoot(paths[0]);
    packages[1] = selectPackage(packages[0], paths[1]);
    if (packages[1] == null)
    {
        Session.Output("[ERROR] Cannot find " + paths[0] + "/" + paths[1] + " in Project Browser");
        return;
    }

    for (var j = 2; j < paths.length; j++)
    {
        packages[j] = selectPackage(packages[j - 1], paths[j]);
        if (packages[j] == null)
        {
            Session.Output("[DEBUG] Creating " + paths[j]);
            // create the parent package
            var parent as EA.Package;
            parent = Repository.GetPackageByGuid(packages[j - 1].PackageGUID);
            packages[j] = parent.Packages.AddNew(paths[j], "");
            packages[j].Update();
            parent.Update();
            parent.Packages.Refresh();
            break;
        }
    }

    // Check if name (package to import) already exists or not
    var targetPackage = selectPackage(packages[paths.length - 1], name);
    if (targetPackage == null)
    {
        if (xmlPath == "")
        {
            Session.Output("[DEBUG] Creating " + name);
            // The package is not SVN controlled, create it empty
            var newPackage as EA.Package;
            newPackage = packages[paths.length - 1].Packages.AddNew(name, "");
            Session.Output("New GUID = " + newPackage.PackageGUID);
            newPackage.Update();
            packages[paths.length - 1].Update();
            packages[paths.length - 1].Packages.Refresh();
        }
        else
        {
            // The package is SVN controlled, import its XMI
            Session.Output("[DEBUG] Need to import: " + svnPath + xmlPath);
            var project as EA.Project;
            project = Repository.GetProjectInterface();
            var result;
            Session.Output("GUID = " + packages[paths.length - 1].PackageGUID);
            Session.Output("GUID XML = " + project.GUIDtoXML(packages[paths.length - 1].PackageGUID));
            Session.Output("XMI file = " + svnPath + xmlPath);
            result = project.ImportPackageXMI(project.GUIDtoXML(packages[paths.length - 1].PackageGUID), svnPath + xmlPath, 1, 0);
            Session.Output(result);
            packages[paths.length - 1].Update();
            packages[paths.length - 1].Packages.Refresh();
        }
    }
    else
    {
        // target exists, do not do anything
        Session.Output("[DEBUG] Target package \"" + name + "\" already exists");
    }
}

function selectPackage(thePackage, childName)
{
    var childPackage as EA.Package;
    childPackage = null;

    if (thePackage == null)
        return null;

    for (var i = 0; i < thePackage.Packages.Count; i++)
    {
        childPackage = thePackage.Packages.GetAt(i);
        if (childPackage.Name == childName)
        {
            Session.Output("[DEBUG] Found " + childName);
            return childPackage;
        }
    }

    Session.Output("[DEBUG] Cannot find " + childName);
    return null;
}

function selectRoot(rootName)
{
    for (var y = 0; y < Repository.Models.Count; y++)
    {
        var root as EA.Package;
        root = Repository.Models.GetAt(y);
        if (root.Name == rootName)
        {
            return root;
        }
    }
    return null;
}

main();