Copy Versioned nodes to another repository workspace - jcr

I need to move old files from the currently running repository to an archive repository.
I am able to copy the document nodes; the problem is copying the version nodes.
I am trying to iterate over the version history, but I am not sure how to copy all properties of each version:
VersionManager versionManager1 =
        oldChildNode.getSession().getWorkspace().getVersionManager();
versionManager1.checkout(oldChildNode.getPath());
VersionHistory versionHistory1 = versionManager1.getVersionHistory(oldChildNode.getPath());
VersionIterator versions = versionHistory1.getAllVersions();
while (versions.hasNext()) {
    try {
        Version versionedNode = versions.nextVersion();
        NodeIterator nodeIterator = versionedNode.getNodes();
        System.out.println(" Version is :: " + versionedNode.getName());
        while (nodeIterator.hasNext()) {
            Node currentNode = nodeIterator.nextNode();
            System.out.println(" JCR Title :: " + currentNode.getName());
        }
    } catch (UnsupportedRepositoryOperationException jcrexce) {
        logger.info("Exception while accessing versioned nodes >> ");
        jcrexce.printStackTrace();
    } catch (PathNotFoundException pexec) {
        pexec.printStackTrace();
    }
}
An nt:resource child node is added to the document node.
Structure:
/**
* Document node
*/
[et:document] > nt:file, mix:title, mix:versionable, mix:shareable
+ * (nt:file) VERSION
- et:tags multiple
- et:role multiple
- et:docUserList multiple
- et:id (LONG)
- et:favourites (BOOLEAN)
- et:lastAccessed (STRING)
- et:lastAccessedOn (DATE)
- et:documentSize (LONG)
- et:fileOwnerName (STRING)
- et:fileOwnerId (STRING)
- * (undefined)
Any pointers on how to move versioned nodes from one repository to another (not a workspace copy)?
Running on Java 1.6 with Jackrabbit 2.8.0

You may use the RepositoryCopier API for that.
Basically, it allows you to move data across different repositories. For more information, please have a look at:
https://jackrabbit.apache.org/api/1.6/org/apache/jackrabbit/core/RepositoryCopier.html
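As a minimal sketch (the paths are placeholders): RepositoryCopier copies an entire source repository, including version histories, into a target repository directory. To my understanding, the source repository should not be in active use while copying, and the target is expected to be fresh.

```java
import java.io.File;

import org.apache.jackrabbit.core.RepositoryCopier;

// Copies the whole source repository, version storage included,
// into the target repository home directory.
RepositoryCopier.copy(
    new File("/path/to/source-repo"),    // existing repository home
    new File("/path/to/archive-repo"));  // target home for the archive
```

Because the version store is copied wholesale, this avoids having to reconstruct version properties node by node as in the iteration above.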


Problem with getting rights data [for execution / read / write] file

I'm writing a program that should output meta-information (size, permissions to execute/read/write, time of last modification) for all files in a specified directory.
I managed to retrieve all of the information except the execute/read/write permissions.
I tried to get this info using PosixFilePermissions, but when adding to the List I get Exception in thread "main" java.lang.UnsupportedOperationException.
Maybe I should use some other library? Or did I make a mistake somewhere? I would be grateful for any advice!
fun long(path: Path): MutableList<String> {
    val listOfFiles = mutableListOf<String>()
    val files = File("$path").listFiles()
    var attr: BasicFileAttributes
    Arrays.sort(files, NameFileComparator.NAME_COMPARATOR)
    files.forEach {
        if (it.isFile) {
            attr = Files.readAttributes<BasicFileAttributes>(it.toPath(), BasicFileAttributes::class.java)
            listOfFiles.add("${it.name} ${attr.size()} ${attr.lastModifiedTime()}" +
                    " ${PosixFilePermissions.toString(Files.getPosixFilePermissions(it.toPath()))}")
        }
        else listOfFiles.add("dir ${it.name}")
    }
    return listOfFiles
}
PosixFilePermissions is only usable on POSIX-compatible file systems (Linux etc.).
On a Windows system, the permissions have to be accessed directly:
file.canRead()
file.canWrite()
file.canExecute()
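A small helper along these lines (illustrative Java; the class and method names are my own) builds an rwx-style string from those calls and avoids the UnsupportedOperationException entirely:

```java
import java.io.File;

public class PermissionCheck {

    /**
     * Portable "rwx"-style permission string. Unlike
     * Files.getPosixFilePermissions, which throws
     * UnsupportedOperationException on non-POSIX file systems,
     * File.canRead/canWrite/canExecute work everywhere.
     */
    static String permissions(File f) {
        return (f.canRead() ? "r" : "-")
             + (f.canWrite() ? "w" : "-")
             + (f.canExecute() ? "x" : "-");
    }

    public static void main(String[] args) {
        // The current directory is normally at least readable.
        System.out.println(permissions(new File(".")));
    }
}
```

On POSIX systems this loses the group/other bits that PosixFilePermissions would report (it only reflects the current process's access), but it never throws on Windows.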

Apache-ignite: Persistent Storage

My understanding of Ignite persistent storage is that the data is not only kept in memory but also written to disk.
When the node is restarted, it should read the data from disk back into memory.
So I am using this example to test it out, but I updated it a little because I don't want to use XML.
This is my slightly updated code:
public class PersistentIgniteExpr {
    /**
     * Organizations cache name.
     */
    private static final String ORG_CACHE = "CacheQueryExample_Organizations";

    /** */
    private static final boolean UPDATE = true;

    public void test(String nodeId) {
        // Apache Ignite node configuration.
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Ignite persistence configuration.
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();

        // Enabling the persistence.
        storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

        // Applying settings.
        cfg.setDataStorageConfiguration(storageCfg);

        List<String> addresses = new ArrayList<>();
        addresses.add("127.0.0.1:47500..47502");

        TcpDiscoverySpi tcpDiscoverySpi = new TcpDiscoverySpi();
        tcpDiscoverySpi.setIpFinder(new TcpDiscoveryMulticastIpFinder().setAddresses(addresses));
        cfg.setDiscoverySpi(tcpDiscoverySpi);

        try (Ignite ignite = Ignition.getOrStart(cfg.setIgniteInstanceName(nodeId))) {
            // Activate the cluster. Required when the persistent store is enabled, because you
            // might need to wait while all the nodes that store a subset of data on disk join
            // the cluster.
            ignite.active(true);

            CacheConfiguration<Long, Organization> cacheCfg = new CacheConfiguration<>(ORG_CACHE);
            cacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
            cacheCfg.setBackups(1);
            cacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
            cacheCfg.setIndexedTypes(Long.class, Organization.class);

            IgniteCache<Long, Organization> cache = ignite.getOrCreateCache(cacheCfg);

            if (UPDATE) {
                System.out.println("Populating the cache...");
                try (IgniteDataStreamer<Long, Organization> streamer = ignite.dataStreamer(ORG_CACHE)) {
                    streamer.allowOverwrite(true);
                    for (long i = 0; i < 100_000; i++) {
                        streamer.addData(i, new Organization(i, "organization-" + i));
                        if (i > 0 && i % 10_000 == 0)
                            System.out.println("Done: " + i);
                    }
                }
            }

            // Run SQL without explicitly calling loadCache().
            QueryCursor<List<?>> cur = cache.query(
                new SqlFieldsQuery("select id, name from Organization where name like ?")
                    .setArgs("organization-54321"));
            System.out.println("SQL Result: " + cur.getAll());

            // Run get() without explicitly calling loadCache().
            Organization org = cache.get(54321L);
            System.out.println("GET Result: " + org);
        }
    }
}
When I run it the first time, it works as intended.
After running it once, I assume the data has been written to disk, since the code enables persistent storage.
When I run it the second time, I comment out this part:
if (UPDATE) {
    System.out.println("Populating the cache...");
    try (IgniteDataStreamer<Long, Organization> streamer = ignite.dataStreamer(ORG_CACHE)) {
        streamer.allowOverwrite(true);
        for (long i = 0; i < 100_000; i++) {
            streamer.addData(i, new Organization(i, "organization-" + i));
            if (i > 0 && i % 10_000 == 0)
                System.out.println("Done: " + i);
        }
    }
}
That is the part where the data is written. When the SQL query is executed, it returns null. Does that mean the data was not written to disk?
Another question: I am not very clear about TcpDiscoverySpi. Can someone explain that as well?
Thanks in advance.
Do you have any exceptions at node startup?
Most probably you don't have the IGNITE_HOME env variable configured, and the work directory for persistence is chosen differently each time you run a node.
You can either set the IGNITE_HOME env variable or add a line of code to set the work directory explicitly: cfg.setWorkDirectory("C:\\workDirectory");
TcpDiscoverySpi provides a way to discover remote nodes in a grid, so that a starting node can join the cluster. If you know the list of IPs, it is better to use TcpDiscoveryVmIpFinder; TcpDiscoveryMulticastIpFinder broadcasts UDP messages to the network to discover other nodes and does not require an IP list at all.
Please see https://apacheignite.readme.io/docs/cluster-config for more details.
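Putting the two suggestions together, a configuration sketch (the work-directory path is a placeholder; TcpDiscoveryVmIpFinder is the static-IP finder mentioned above) could look like:

```java
import java.util.Arrays;

import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

IgniteConfiguration cfg = new IgniteConfiguration();

// Pin the persistence location so every run finds the same data on disk.
cfg.setWorkDirectory("C:\\workDirectory");

// Static discovery: list the known node addresses instead of multicasting.
TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(Arrays.asList("127.0.0.1:47500..47502"));

TcpDiscoverySpi spi = new TcpDiscoverySpi();
spi.setIpFinder(ipFinder);
cfg.setDiscoverySpi(spi);
```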

JGit log strange behavior after merge

I found some strange behavior (a bug?) in the log command.
The test below creates a repo, creates a branch, makes some commits both to the created branch and to master, then merges master into the created branch. After the merge it calculates the number of commits between the branch and master. Because master has already been merged, the branch is not behind master, i.e. the corresponding commit count should be 0.
public class JGitBugTest {
    @Rule
    public TemporaryFolder tempFolder = new TemporaryFolder();

    @Test
    public void testJGitLogBug() throws Exception {
        final String BRANCH_NAME = "TST-2";
        final String MASTER_BRANCH_NAME = Constants.MASTER;

        File folder = tempFolder.newFolder();

        // Create a Git repository
        Git api = Git.init().setBare( false ).setDirectory( folder ).call();
        Repository repository = api.getRepository();

        // Add an initial commit
        api.commit().setMessage( "Initial commit" ).call();

        // Create a new branch and add some commits to it
        api.checkout().setCreateBranch( true ).setName( BRANCH_NAME ).call();
        api.commit().setMessage( "TST-2 Added files 1" ).call();

        // Add some commits to the master branch too
        api.checkout().setName( MASTER_BRANCH_NAME ).call();
        api.commit().setMessage( "TST-1 Added files 1" ).call();
        api.commit().setMessage( "TST-1 Added files 2" ).call();

        // If this delay is commented out -- the test fails and
        // 'behind' is equal to "the number of commits to master - 1".
        // Thread.sleep(1000);

        // Checkout the branch and merge master into it
        api.checkout().setName( BRANCH_NAME ).call();
        api.merge()
           .include( repository.resolve( MASTER_BRANCH_NAME ) )
           .setStrategy( MergeStrategy.RECURSIVE )
           .call()
           .getNewHead()
           .name();

        // Calculate the number of commits the branch is behind master.
        // It should be zero because we have merged master into the branch already.
        Iterable<RevCommit> iterable = api.log()
                .add( repository.resolve( MASTER_BRANCH_NAME ) )
                .not( repository.resolve( BRANCH_NAME ) )
                .call();
        int behind = 0;
        for( RevCommit commit : iterable ) {
            behind++;
        }

        Assert.assertEquals( 0, behind );
    }
}
The above test fails: behind yields the number of commits on master minus 1.
Moreover, if the commented-out Thread.sleep is restored, the bug goes away and behind is equal to 0.
What am I doing wrong? Is this a bug in the JGit library or in my code?
Running the code on Windows, I can reproduce what you describe.
This looks like a bug in JGit to me. I recommend opening a Bugzilla ticket or posting your findings to the mailing list.
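One observation that may explain the sleep dependence: Git commit timestamps have one-second resolution, and JGit's RevWalk orders commits by commit time, so commits created within the same second can be traversed in an unexpected order. If that is indeed the cause, a possible workaround (a sketch; baseTime and the identity values are hypothetical, not from the original test) is to give each test commit an explicit, distinct committer timestamp:

```java
// Hypothetical workaround: distinct committer timestamps per commit,
// one second apart, so time-based traversal cannot collapse them.
long baseTime = System.currentTimeMillis();
PersonIdent first = new PersonIdent("Tester", "tester@example.com",
        new Date(baseTime), TimeZone.getDefault());
PersonIdent second = new PersonIdent("Tester", "tester@example.com",
        new Date(baseTime + 1000L), TimeZone.getDefault());

api.commit().setMessage("TST-1 Added files 1").setCommitter(first).call();
api.commit().setMessage("TST-1 Added files 2").setCommitter(second).call();
```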

Why can I not pickle my case classes? What should I do to solve this manually next time?

Edit 2: Observations and questions
I am pretty sure, along with the commenter Justin below, that the problem is due to an errant build.sbt configuration. However, this is the first time I have seen an errant build.sbt configuration that literally works for everything else except the picklers. Maybe that is because they use macros, which I avoid as a rule.
Why would it matter whether Flow.merge is used vs Flow.map if the problem is with the sbt?
Suspicious build.sbt extract
lazy val server = project
.dependsOn(sharedJvm, client)
Suspicious stack trace
So this is the top of the stack: it goes from a method I cannot find to the linking environment to the string encoding utils. Ok.
server java.lang.RuntimeException: stub
Huh? stub?
server at scala.sys.package$.error(package.scala:27)
server at scala.scalajs.runtime.package$.linkingInfo(package.scala:143)
server at scala.scalajs.runtime.package$.environmentInfo(package.scala:137)
HUH?
server at scala.scalajs.js.Dynamic$.global(Dynamic.scala:78)
???
server at boopickle.StringCodec$.encodeUTF8(StringCodec.scala:56)
Edit 1: My big and beautiful build.sbt might be the problem
What you cannot see is how I organized things in my project folder:
JvmDependencies.scala which has regular Jvm dependencies
SjsDependencies.scala which has Def.settingsKeys of libraryDependencies on JsModuleIDs
WebJarDependencies.scala which has javascripts and css's
build.sbt
lazy val shared = (crossProject.crossType(CrossType.Pure) in file("shared"))
  .configure(_.enablePlugins(ScalaJSPlugin))
  .settings(SjsDependencies.pickling.toSettingsDefinition(): _*)
  .settings(SjsDependencies.tagsAndDom.toSettingsDefinition(): _*)
  .settings(SjsDependencies.css.toSettingsDefinition(): _*)

lazy val sharedJvm = shared.jvm
lazy val sharedJs = shared.js

lazy val cmdlne = project
  .dependsOn(sharedJvm)
  .settings(
    libraryDependencies ++= (
      JvmDependencies.commandLine ++
      JvmDependencies.logging ++
      JvmDependencies.akka ++
      JvmDependencies.serialization
    )
  )

lazy val client = project
  .enablePlugins(ScalaJSPlugin, SbtWeb, SbtSass)
  .dependsOn(sharedJs)
  .settings(
    (SjsDependencies.shapeless ++ SjsDependencies.audiovideo ++ SjsDependencies.databind ++ SjsDependencies.functional ++ SjsDependencies.lensing ++ SjsDependencies.logging ++ SjsDependencies.reactive).toSettingsDefinition(),
    jsDependencies ++= WebjarDependencies.js,
    libraryDependencies ++= WebjarDependencies.notJs,
    persistLauncher in Compile := true
  )

lazy val server = project
  .dependsOn(sharedJvm, client)
  .enablePlugins(SbtNativePackager)
  .settings(
    copyWebJarResources := {
      streams.value.log("Copying webjar resources")
      val `Web Modules target directory` = (resourceManaged in Compile).value / "assets"
      val `Web Modules source directory` = (WebKeys.assets in Assets in client).value / "lib"
      final class UsefulFileFilter(acceptable: String*) extends FileFilter {
        // TODO ADJUST TO EXCLUDE JS MAP FILES
        import scala.collection.JavaConversions._
        def accept(file: File) =
          (file.isDirectory && FileUtils.listFiles(file, acceptable.toArray, true).nonEmpty) ||
          acceptable.contains(file.ext) && !file.name.contains(".js.")
      }
      val `file filter` = new UsefulFileFilter("css", "scss", "sass", "less", "map")
      IO.createDirectory(`Web Modules target directory`)
      IO.copyDirectory(source = `Web Modules source directory`, target = `Web Modules target directory` / "script")
      FileUtils.copyDirectory(`Web Modules source directory`, `Web Modules target directory` / "style", `file filter`)
    },
    // run the copy after compile/assets but before managed resources
    copyWebJarResources <<= copyWebJarResources dependsOn(compile in Compile, WebKeys.assets in Compile in client, fastOptJS in Compile in client),
    managedResources in Compile <<= (managedResources in Compile) dependsOn copyWebJarResources,
    watchSources <++= (watchSources in client),
    resourceGenerators in Compile <+= Def.task {
      val files = ((crossTarget in (client, Compile)).value ** ("*.js" || "*.map")).get
      val mappings: Seq[(File, String)] = files pair rebase((crossTarget in (client, Compile)).value, ((resourceManaged in Compile).value / "assets/").getAbsolutePath)
      val map: Seq[(File, File)] = mappings.map { case (s, t) => (s, file(t)) }
      IO.copy(map).toSeq
    },
    reStart <<= reStart dependsOn (managedResources in Compile),
    libraryDependencies ++= (
      JvmDependencies.akka ++
      JvmDependencies.jarlocating ++
      JvmDependencies.functional ++
      JvmDependencies.serverPickling ++
      JvmDependencies.logging ++
      JvmDependencies.serialization ++
      JvmDependencies.testing
    )
  )
Edit 0: A very obscure chat thread has a guy saying what I am feeling: no, not **** scala, but
Mark Eibes @i-am-the-slime Oct 15 2015 09:37
@ochrons I'm still fighting. I can't seem to pickle anything anymore.
https://gitter.im/scala-js/scala-js/archives/2015/10/15
I have a rather simple requirement: I have one WebSocket route on an Akka HTTP server, defined in AkkaServerLogEventToMessageHandler:
object AkkaServerLogEventToMessageHandler
    extends Directives {

  val sourceOfLogs =
    Source.actorPublisher[AkkaServerLogMessage](AkkaServerLogEventPublisher.props) map {
      event ⇒
        BinaryMessage(
          ByteString(
            Pickle.intoBytes[AkkaServerLogMessage](event)
          )
        )
    }

  def apply(): server.Route = {
    handleWebSocketMessages(
      Flow[Message].merge(sourceOfLogs)
    )
  }
}
This fits into a tiny set of routes in the most obvious way.
Now why is it that I cannot get boopickle, upickle, or prickle to serialize something as simple as this case class?
sealed case class AkkaServerLogMessage(
message: String,
level: Int,
timestamp: Long
)
No nesting
All primitive types
No generics
Only three of them
These all produced roughly the same error
Using all three of the common picklers to write
Using TextMessage instead of BinaryMessage and the corresponding upickle or prickle writeJs or whatever methods
Varying the case class down to nothing (nothing, as in no members)
Varying the input itself to the case class
Importing various permutations of Implicits and underscore stuff
... specifically, they gave me variations on the same stupid error (not the same error, but considerably similar)
server [ERROR] [04/21/2016 22:04:00.362] [app-akka.actor.default-dispatcher-7] [akka.actor.ActorSystemImpl(app)] WebSocket handler failed with stub
server java.lang.RuntimeException: stub
server at scala.sys.package$.error(package.scala:27)
server at scala.scalajs.runtime.package$.linkingInfo(package.scala:143)
server at scala.scalajs.runtime.package$.environmentInfo(package.scala:137)
server at scala.scalajs.js.Dynamic$.global(Dynamic.scala:78)
server at boopickle.StringCodec$.encodeUTF8(StringCodec.scala:56)
server at boopickle.Encoder.writeString(Codecs.scala:338)
server at boopickle.BasicPicklers$StringPickler$.pickle(Pickler.scala:183)
server at boopickle.BasicPicklers$StringPickler$.pickle(Pickler.scala:134)
server at boopickle.PickleState.pickle(Pickler.scala:511)
server at shindig.clientaccess.handler.AkkaServerLogEventToMessageHandler$$anonfun$1$Pickler$macro$1$2$.pickle(AkkaServerLogEventToMessageHandler.scala:35)
server at shindig.clientaccess.handler.AkkaServerLogEventToMessageHandler$$anonfun$1$Pickler$macro$1$2$.pickle(AkkaServerLogEventToMessageHandler.scala:35)
server at boopickle.PickleImpl$.apply(Default.scala:70)
server at boopickle.PickleImpl$.intoBytes(Default.scala:75)
server at shindig.clientaccess.handler.AkkaServerLogEventToMessageHandler$$anonfun$1.apply(AkkaServerLogEventToMessageHandler.scala:35)
server at shindig.clientaccess.handler.AkkaServerLogEventToMessageHandler$$anonfun$1.apply(AkkaServerLogEventToMessageHandler.scala:31)
This worked
Not using Flow.merge (defeats the purpose; I want to keep sending output along with the logs)
Using a static value
Other useless things
Appeal
Please let me know where and why I am stupid... I spent four hours on this problem today in different forms, and it is driving me nuts.
In your build.sbt, you have:
lazy val shared = (crossProject.crossType(CrossType.Pure) in file("shared"))
.configure(_.enablePlugins(ScalaJSPlugin))
Do not do this. You must not enable the Scala.js plugin on a cross-project, ever. This also adds it to the JVM side, which will wreak havoc. Most notably, this will cause %%% to resolve the Scala.js artifacts of your dependencies in the JVM project, and that is really bad. This is what causes your issue.
crossProject already adds the Scala.js plugin to the JS part, and only that one. So simply remove that enablePlugins line.
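Concretely, the shared definition from the question would lose only the configure line (same settings as in the original, reproduced here for clarity):

```scala
lazy val shared = (crossProject.crossType(CrossType.Pure) in file("shared"))
  // no .enablePlugins(ScalaJSPlugin) here; crossProject already
  // enables it on the JS side, and only there
  .settings(SjsDependencies.pickling.toSettingsDefinition(): _*)
  .settings(SjsDependencies.tagsAndDom.toSettingsDefinition(): _*)
  .settings(SjsDependencies.css.toSettingsDefinition(): _*)
```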
Mystery solved. Thanks to @Justin du Coeur for pointing me in the right direction.
The reason boopickle wasn't working in particular was that the dependency chain pulled both the Scala.js and the JVM version of boopickle into the server project.
I removed the server project's dependsOn for client and sharedJs, and also removed boopickle from the shared dependencies. Now it works.

Efficient Way to do batch import XMI in Enterprise Architect

Our team is using Enterprise Architect version 10 and SVN for the repository.
Because the EAP file is quite big (e.g. 80 MB), we export each package into a separate XMI file and store it in SVN; the EAP file itself is committed after a milestone. The problem is that to synchronize the EAP file with co-workers' work during development, we need to import lots of XMI files (the total can be around 500 files).
I know that once the EAP file is updated, we can use Package Control -> Get All Latest, so this problem occurs only during parallel development.
We have used keyboard shortcuts to do the import as follows:
1. Ctrl+Alt+I (Import package from XMI file)
2. Select the file name to import
3. Alt+I (Import)
4. Enter (Yes)
5. Repeat steps 2 to 4 until the module is finished
But still, importing hundreds of files is inefficient.
I've checked that Package Control has Batch Import/Export. Batch import/export works when I explicitly hard-code the XMI file name, but the options are not available when using version control (the batch import/export options are greyed out).
Is there a better way to synchronize the EAP and XMI files?
There is a scripting interface in EA. You might be able to automate the import using that. I've not used it, but it's probably quite good.
I'm not sure I fully understand your working environment, but I have some general points that may be of interest. It might be that if you use EA in a different way (especially my first point below), the need to batch import might go away.
Multiworker
First, multiple people can work on the same EAP file at a time. The EAP file is nothing more than an Access database file, and EA uses locking to stop multiple people editing the same package at the same time. But you can comfortably have multiple people editing different packages in one EAP file at the same time. Putting the EAP file on a file share somewhere is a good way of doing it.
Inbuilt Revision Control
Secondly, EA can interact directly with SVN (and other revision control systems). See this. In short, you can set up your EAP file so that individual packages (and everything below them) are SVN controlled. You can then check out an individual package, edit it, and check it back in. Or indeed you can check out the whole branch below a package (including sub-packages that are themselves SVN controlled).
Underneath the hood EA is importing and exporting XMI files and checking them in and out of SVN, whilst the EAP file is always the head revision. Just like what you're doing by hand, but automated. It makes sense given that you can all use the one single EAP file. You do have to be a bit careful rolling back - links originating from objects in older versions of one package might be pointing at objects that no longer exist (but you can look at the import log errors to see if this is the case). It takes a bit of getting used to, but it works pretty well.
There's also the built in package baselining functionality - that might be all you need anyway, and works quite well especially if you're all using the same EAP file.
Bigger Database Engine
Thirdly, you don't have to have an EAP file at all. The model's database can be in any suitable database system (MySQL, SQL Server, Oracle, etc). That gives you all sorts of options for scaling up how it's used, what it's like over a WAN/Internet, etc.
In short, Sparx have been quite sensible about how EA can be used in a multi-worker environment, and it's worth exploiting that.
I have created EA scripts in JScript to automate this.
Here is the script to do the export:
!INC Local Scripts.EAConstants-JScript
/*
* Script Name : Export List of SVN Packages
* Author : SDK
* Purpose : Export a package and all of its subpackages information related to version
* controlled. The exported file then can be used to automatically import
* the XMIs
* Date : 30 July 2013
* HOW TO USE : 1. Select the package that you would like to export in the Project Browser
* 2. Change the output filepath in this script if necessary.
* By default it is "D:\\EAOutput.txt"
* 3. Send the output file to your colleague who wanted to import the XMIs
*/
var f;

function main()
{
    // UPDATE THE FOLLOWING OUTPUT FILE PATH IF NECESSARY
    var filename = "D:\\EAOutput.txt";
    var ForReading = 1, ForWriting = 2, ForAppending = 8;

    Repository.EnsureOutputVisible( "Script" );
    Repository.ClearOutput( "Script" );
    Session.Output("Start generating output...please wait...");

    var treeSelectedType = Repository.GetTreeSelectedItemType();
    switch ( treeSelectedType )
    {
        case otPackage:
        {
            var fso = new ActiveXObject("Scripting.FileSystemObject");
            f = fso.OpenTextFile(filename, ForWriting, true);

            var selectedObject as EA.Package;
            selectedObject = Repository.GetContextObject();

            reportPackage(selectedObject);
            loopChildPackages(selectedObject);

            f.Close();
            Session.Output( "Done! Check your output at " + filename);
            break;
        }
        default:
        {
            Session.Prompt( "This script does not support items of this type.", promptOK );
        }
    }
}

function loopChildPackages(thePackage)
{
    for (var j = 0 ; j < thePackage.Packages.Count; j++)
    {
        var child as EA.Package;
        child = thePackage.Packages.GetAt(j);
        reportPackage(child);
        loopChildPackages(child);
    }
}

function getParentPath(childPackage)
{
    if (childPackage.ParentID != 0)
    {
        var parentPackage as EA.Package;
        parentPackage = Repository.GetPackageByID(childPackage.ParentID);
        return getParentPath(parentPackage) + "/" + parentPackage.Name;
    }
    return "";
}

function reportPackage(thePackage)
{
    f.WriteLine("GUID=" + thePackage.PackageGUID + ";"
        + "NAME=" + thePackage.Name + ";"
        + "VCCFG=" + getVCCFG(thePackage) + ";"
        + "XML=" + thePackage.XMLPath + ";"
        + "PARENT=" + getParentPath(thePackage).substring(1) + ";"
    );
}

function getVCCFG(thePackage)
{
    if (thePackage.IsVersionControlled)
    {
        var array = (thePackage.Flags).split(";");
        for (var z = 0 ; z < array.length; z++)
        {
            var pos = array[z].indexOf('=');
            if (pos > 0)
            {
                var key = array[z].substring(0, pos);
                var value = array[z].substring(pos + 1);
                if (key == "VCCFG")
                {
                    return (value);
                }
            }
        }
    }
    return "";
}

main();
And the script to do the import:
!INC Local Scripts.EAConstants-JScript
/*
* Script Name : Import List Of SVN Packages
* Author : SDK
* Purpose : Imports a package with all of its sub packages generated from
* "Export List Of SVN Packages" script
* Date : 01 Aug 2013
* HOW TO USE : 1. Get the output file generated by "Export List Of SVN Packages" script
* from your colleague
* 2. Get the XMIs in the SVN local copy
* 3. Change the path to the output file in this script if necessary (var filename).
* By default it is "D:\\EAOutput.txt"
* 4. Change the path to local SVN
* 5. Run the script
*/
var f;
var svnPath;

function main()
{
    // CHANGE THE FOLLOWING TWO LINES ACCORDING TO YOUR INPUT AND LOCAL SVN COPY
    var filename = "D:\\EAOutput.txt";
    svnPath = "D:\\svn.xxx.com\\yyy\\docs\\design\\";
    var ForReading = 1, ForWriting = 2, ForAppending = 8;

    Repository.EnsureOutputVisible( "Script" );
    Repository.ClearOutput( "Script" );
    Session.Output("[INFO] Start importing packages from " + filename + ". Please wait...");

    var fso = new ActiveXObject("Scripting.FileSystemObject");
    f = fso.OpenTextFile(filename, ForReading);

    // Read from the file and display the results.
    while (!f.AtEndOfStream)
    {
        var r = f.ReadLine();
        parseLine(r);
        Session.Output("--------------------------------------------------------------------------------");
    }
    f.Close();
    Session.Output("[INFO] Finished");
}

function parseLine(line)
{
    Session.Output("[INFO] Parsing " + line);

    var array = (line).split(";");
    var guid;
    var name;
    var isVersionControlled = false;
    var xmlPath = "";
    var parentPath;

    for (var z = 0 ; z < array.length; z++)
    {
        var pos = array[z].indexOf('=');
        if (pos > 0)
        {
            var key = array[z].substring(0, pos);
            var value = array[z].substring(pos + 1);
            if (key == "GUID") {
                guid = value;
            } else if (key == "NAME") {
                name = value;
            } else if (key == "VCCFG") {
                if (value != "") {
                    isVersionControlled = true;
                }
            } else if (key == "XML") {
                if (isVersionControlled) {
                    xmlPath = value;
                }
            } else if (key == "PARENT") {
                parentPath = value;
            }
        }
    }

    // Quick check whether the target already exists, to speed up the process
    var targetPackage as EA.Package;
    targetPackage = Repository.GetPackageByGuid(guid);
    if (targetPackage != null)
    {
        // Target exists, do not do anything
        Session.Output("[DEBUG] Target package \"" + name + "\" already exists");
        return;
    }

    // Note: JScript arrays use .length, not .Count
    var paths = (parentPath).split("/");
    var packages = new Array(paths.length);
    for (var i = 0; i < paths.length; i++)
    {
        packages[i] = null;
    }

    if (paths.length < 2)
    {
        Session.Output("[INFO] Skipped root or level1");
        return;
    }

    packages[0] = selectRoot(paths[0]);
    packages[1] = selectPackage(packages[0], paths[1]);
    if (packages[1] == null)
    {
        Session.Output("[ERROR] Cannot find " + paths[0] + "/" + paths[1] + " in Project Browser");
        return;
    }

    for (var j = 2; j < paths.length; j++)
    {
        packages[j] = selectPackage(packages[j - 1], paths[j]);
        if (packages[j] == null)
        {
            Session.Output("[DEBUG] Creating " + paths[j]);
            // Create the missing intermediate package
            var parent as EA.Package;
            parent = Repository.GetPackageByGuid(packages[j - 1].PackageGUID);
            packages[j] = parent.Packages.AddNew(paths[j], "");
            packages[j].Update();
            parent.Update();
            parent.Packages.Refresh();
            break;
        }
    }

    // Check whether the package to import already exists or not
    var targetPackage = selectPackage(packages[paths.length - 1], name);
    if (targetPackage == null)
    {
        if (xmlPath == "")
        {
            Session.Output("[DEBUG] Creating " + name);
            // The package is not SVN controlled
            var newPackage as EA.Package;
            newPackage = packages[paths.length - 1].Packages.AddNew(name, "");
            Session.Output("New GUID = " + newPackage.PackageGUID);
            newPackage.Update();
            packages[paths.length - 1].Update();
            packages[paths.length - 1].Packages.Refresh();
        }
        else
        {
            // The package is SVN controlled and must be imported from XMI
            Session.Output("[DEBUG] Need to import: " + svnPath + xmlPath);
            var project as EA.Project;
            project = Repository.GetProjectInterface();
            var result;
            Session.Output("GUID = " + packages[paths.length - 1].PackageGUID);
            Session.Output("GUID XML = " + project.GUIDtoXML(packages[paths.length - 1].PackageGUID));
            Session.Output("XMI file = " + svnPath + xmlPath);
            result = project.ImportPackageXMI(project.GUIDtoXML(packages[paths.length - 1].PackageGUID), svnPath + xmlPath, 1, 0);
            Session.Output(result);
            packages[paths.length - 1].Update();
            packages[paths.length - 1].Packages.Refresh();
        }
    }
    else
    {
        // Target exists, do not do anything
        Session.Output("[DEBUG] Target package \"" + name + "\" already exists");
    }
}

function selectPackage(thePackage, childName)
{
    var childPackage as EA.Package;
    childPackage = null;

    if (thePackage == null)
        return null;

    for (var i = 0; i < thePackage.Packages.Count; i++)
    {
        childPackage = thePackage.Packages.GetAt(i);
        if (childPackage.Name == childName)
        {
            Session.Output("[DEBUG] Found " + childName);
            return childPackage;
        }
    }
    Session.Output("[DEBUG] Cannot find " + childName);
    return null;
}

function selectRoot(rootName)
{
    for (var y = 0; y < Repository.Models.Count; y++)
    {
        var root = Repository.Models.GetAt(y);
        if (root.Name == rootName)
        {
            return root;
        }
    }
    return null;
}

main();
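For readers who want to process the export file outside EA, the line format is just semicolon-separated key=value pairs. A stand-alone sketch of the same parsing logic in Java (the class name is mine, the sample line is fabricated for illustration):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ExportLineParser {

    /**
     * Parses one line of the export file, e.g.
     * "GUID={123};NAME=Design;VCCFG=svn;XML=design.xml;PARENT=Model/Design;"
     * into an ordered key/value map. Mirrors the split-on-';' then
     * split-on-first-'=' logic of the JScript import script above.
     */
    static Map<String, String> parse(String line) {
        Map<String, String> fields = new LinkedHashMap<>();
        for (String part : line.split(";")) {
            int pos = part.indexOf('=');
            if (pos > 0) {
                fields.put(part.substring(0, pos), part.substring(pos + 1));
            }
        }
        return fields;
    }

    public static void main(String[] args) {
        Map<String, String> f =
            parse("GUID={123};NAME=Design;VCCFG=svn;XML=design.xml;PARENT=Model/Design;");
        System.out.println(f.get("NAME")); // prints "Design"
    }
}
```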