Importing external libraries/packages from an ImageJ/Fiji script - Jython

I am writing an ImageJ/Fiji plugin in Jython using the Fiji script editor. I need to import external numerical libraries such as ParallelColt to help me handle multidimensional matrices.
I began with ParallelColt by placing its jar file in the Java folder inside Fiji:
"Fiji.app/java/macosx-java3d/Home/lib/ext"
Then I tried importing it by writing:
"import ParallelColt" or, more specifically, "from ParallelColt import jplasma"
and I get a "module not found" error.
I tried placing the jar inside the Fiji plugins folder instead, but still with no success. I also tried using the folder with all the Java classes of ParallelColt instead of the jar file, and I still was not able to import the classes from my script.
Perhaps the question I am really asking is simply how to import Java libraries from a Jython script. Any help will be greatly appreciated.

There are two issues here.
1. Dependencies
In order to depend on a complex third party software project with multiple JAR files, you need to make sure you have placed all the dependencies of the project (i.e., all the JAR files it needs) into ImageJ.app/jars.
One way to do it is to find the dependencies on Maven Central and then use a build tool such as Maven or Gradle to download them.
In your case, you can do this with Maven by crafting the following pom.xml file:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>dummy</groupId>
  <artifactId>dummy</artifactId>
  <version>0.0.0-SNAPSHOT</version>
  <dependencies>
    <dependency>
      <groupId>net.sourceforge.parallelcolt</groupId>
      <artifactId>parallelcolt</artifactId>
      <version>0.10.1</version>
    </dependency>
    <dependency>
      <groupId>net.mikera</groupId>
      <artifactId>vectorz</artifactId>
      <version>0.34.0</version>
    </dependency>
  </dependencies>
</project>
Then executing:
mvn dependency:copy-dependencies
This will result in the following files in the target/dependency folder:
29087 AMDJ-1.0.1.jar
8687 BTFJ-1.0.1.jar
19900 COLAMDJ-1.0.1.jar
55638 JKLU-1.0.0.jar
1194003 arpack_combined_all-0.1.jar
31162 core-lapack-0.1.jar
212836 csparsej-1.1.1.jar
88639 edn-java-0.4.4.jar
91793 jplasma-1.2.0.jar
762778 jtransforms-2.4.0.jar
237344 junit-4.8.2.jar
8693 mathz-0.3.0.jar
131210 netlib-java-0.9.3.jar
66134 optimization-1.0.jar
4111744 parallelcolt-0.10.1.jar
14028 randomz-0.3.0.jar
593015 vectorz-0.34.0.jar
Move these into your ImageJ.app/jars folder, and you should be good to go—though beware of multiple versions of the same library, since ImageJ cannot handle that right now.
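If you want a small guard against that duplicate-version problem while copying, here is a minimal sketch as a stand-alone Python 3 script (not a Fiji script); the SRC and DEST paths are assumptions you will need to adjust to your own layout:
# copy_deps.py -- copy the Maven-downloaded jars into ImageJ.app/jars,
# warning when a jar for the same artifact (any version) is already present.
# SRC and DEST are assumptions; adjust them to your own installation.
import re
import shutil
from pathlib import Path

SRC = Path("target/dependency")          # output of mvn dependency:copy-dependencies
DEST = Path("/path/to/ImageJ.app/jars")  # your ImageJ/Fiji jars folder

def artifact_name(jar):
    # Strip a trailing "-<version>" so that foo-1.2.3.jar and foo-1.3.0.jar compare equal.
    return re.sub(r"-\d[\w.]*$", "", jar.stem)

existing = {artifact_name(j): j.name for j in DEST.glob("*.jar")}
for jar in SRC.glob("*.jar"):
    name = artifact_name(jar)
    if name in existing and existing[name] != jar.name:
        print("WARNING: %s may clash with existing %s" % (jar.name, existing[name]))
    shutil.copy2(jar, DEST / jar.name)
    print("copied %s" % jar.name)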
2. Class references
The line from ParallelColt import jplasma is not valid. That is, there is no ParallelColt.jplasma Java class or package. You can verify this yourself from the command line:
$ find-class() { for f in *.jar; do result=$(jar tf "$f" | grep "$1"); if [ "$result" != "" ]; then echo $f; fi; done; }
$ cd ImageJ.app/jars
$ find-class jplasma
Two JAR files have classes containing that string:
core-lapack-0.1.jar
jplasma-1.2.0.jar
And taking a look inside them quickly reveals that the package names do not start with ParallelColt:
$ jar tf jplasma-1.2.0.jar | grep jplasma
edu/emory/mathcs/jplasma/
edu/emory/mathcs/jplasma/benchmark/
edu/emory/mathcs/jplasma/test/
edu/emory/mathcs/jplasma/tdouble/
edu/emory/mathcs/jplasma/example/
edu/emory/mathcs/jplasma/benchmark/DgelsBenchmark.class
edu/emory/mathcs/jplasma/benchmark/DposvBenchmark.class
edu/emory/mathcs/jplasma/benchmark/DgesvBenchmark.class
edu/emory/mathcs/jplasma/Barrier.class
edu/emory/mathcs/jplasma/test/DposvTest.class
edu/emory/mathcs/jplasma/test/DgelsTest.class
edu/emory/mathcs/jplasma/test/DgesvTest.class
edu/emory/mathcs/jplasma/tdouble/Dgels.class
edu/emory/mathcs/jplasma/tdouble/Dgeqrf.class
edu/emory/mathcs/jplasma/tdouble/Dglobal$Dplasma_cntrl.class
edu/emory/mathcs/jplasma/tdouble/Dplasma.class
...
edu/emory/mathcs/jplasma/tdouble/Dglobal$Dplasma_aux.class
edu/emory/mathcs/jplasma/tdouble/Pdormqr.class
edu/emory/mathcs/jplasma/example/DposvExample.class
edu/emory/mathcs/jplasma/example/DgesvExample.class
edu/emory/mathcs/jplasma/example/DgelsExample.class
Rather, if you would like to use, e.g., edu.emory.mathcs.jplasma.tdouble.Dplasma, you need to import it as:
from edu.emory.mathcs.jplasma.tdouble import Dplasma
Here is a port of DgelsExample.java from the JPlasma distribution which works in the ImageJ Script Editor:
from edu.emory.mathcs.jplasma.tdouble import Dplasma
from java.lang import Math
import jarray
M = 15
N = 10
NRHS = 5
A = jarray.zeros(M * N, 'd')
B = jarray.zeros(M * NRHS, 'd')

# Initialize A
for i in range(0, M):
    for j in range(0, N):
        A[M * j + i] = 0.5 - Math.random()

# Initialize B
for i in range(0, M):
    for j in range(0, NRHS):
        B[M * j + i] = Math.random()

# Plasma Initialize
Dplasma.plasma_Init(M, N, NRHS)

# Allocate T
T = Dplasma.plasma_Allocate_T(M, N)

# Solve the problem
INFO = Dplasma.plasma_DGELS(Dplasma.PlasmaNoTrans, M, N, NRHS, A, 0, M, T, 0, B, 0, M)

# Plasma Finalize
Dplasma.plasma_Finalize()

if INFO < 0:
    print("-- Error in DgelsExample example !")
else:
    print("-- Run successful !")
See the Jython Scripting page of the ImageJ Wiki for further information on writing Jython scripts.
One final parting comment: the ImageJ Script Editor was really designed as a tool for developing single-class scripts and plugins that rely on components already present in the installation. For a project like this, with many dependencies, my suggestion would be to code in Java using an IDE such as Eclipse instead, because such IDEs offer many more productivity-enhancing features: you can use AST-based auto-completion to explore the API of the libraries you're using, and browse javadocs and sources easily without doing web searches.

Related

Generate data file at install time

My Python package depends on a static data file which is automatically generated from a smaller seed file using a function that is part of the package.
It makes sense to me to do this generation at the time of running setup.py install. Is there a standard way in setup() to describe “run this function before installing this package's additional files” (the options in the docs are all static)? If not, where should I place the call to that function?
Best done in two steps using the cmdclass mechanism:
1. add a custom command to generate the data file
2. override install to call that command before proceeding (see the note at the end on using build_py instead)
from distutils.cmd import Command
from setuptools import setup
from setuptools.command.install import install

class GenerateDataFileCommand(Command):
    description = 'generate data file'
    user_options = []

    def initialize_options(self):
        pass

    def finalize_options(self):
        pass

    def run(self):
        pass  # Do something here...

class InstallCommand(install):
    def run(self):
        self.run_command('generate_data_file')
        return super().run()

setup(
    cmdclass={
        'generate_data_file': GenerateDataFileCommand,
        'install': InstallCommand,
    },
    # ...
)
This way you can call python setup.py generate_data_file to generate the data file as a stand-alone step, but the usual setup procedure (python setup.py install) will also ensure it's called.
(However, I'd recommend including the built file in the distribution archive, so end users don't have to build it themselves – that is, override build_py (class setuptools.command.build_py.build_py) instead of install.)
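For illustration, a minimal sketch of that build_py variant, reusing the GenerateDataFileCommand class from above (the BuildPyCommand name is just illustrative), could look like this:
from setuptools import setup
from setuptools.command.build_py import build_py

class BuildPyCommand(build_py):
    """Generate the data file before build_py collects the package files."""
    def run(self):
        self.run_command('generate_data_file')
        super().run()

setup(
    cmdclass={
        'generate_data_file': GenerateDataFileCommand,  # as defined above
        'build_py': BuildPyCommand,
    },
    # ...
)
Because build_py runs as part of every build (install, bdist_wheel, and so on), the generated file then ends up in the built packages as well, provided it is declared as package data.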

How do I register "custom" Op (actually, from syntaxnet) with tensorflow serving?

I'm trying to serve a model exported from syntaxnet but the parser_ops are not available. The library file with the ops is found (out-of-tree) at:
../models/syntaxnet/bazel-out/local-opt/bin/syntaxnet/parser_ops.so
I'm currently hacking on the mnist_inference example (because I don't know how to build anything out-of-tree with Bazel), and the command I'm running is:
./bazel-out/local-opt/bin/tensorflow_serving/example/mnist_inference --port=9000 /tmp/model/00000001
And the error I'm getting is:
F tensorflow_serving/example/mnist_inference.cc:208] Check failed: ::tensorflow::Status::OK() == (bundle_factory->CreateSessionBundle(bundle_path, &bundle)) (OK vs. Not found: Op type not registered 'FeatureSize')
And FeatureSize is definitely defined in parser_ops.so; I just don't know how to load it.
I'm not too familiar with TF (I work on Bazel) but it looks like you need to add parser_ops as a dependency of mnist_inference.
There is a right way to do this and a wrong (easier) way.
The Right Way
Basically, you add syntaxnet as a dependency of the example you're building. Unfortunately, the syntaxnet project and the tensorflow_serving project import tensorflow itself under different names, so you have to do some mangling of the serving WORKSPACE file to get this working.
Add the following to the tensorflow_serving WORKSPACE file:
local_repository(
    name = "syntaxnet",
    path = "/path/to/your/checkout/of/models/syntaxnet",
)
This allows you to refer to the targets in syntaxnet from the tensorflow_serving project (by prefixing them with "@syntaxnet"). Unfortunately, as mentioned above, you also have to get all of syntaxnet's external dependencies into the WORKSPACE file, which is annoying. You can test whether it's working with bazel build @syntaxnet//syntaxnet:parser_ops_cc.
Once you've done that, add the cc_library @syntaxnet//syntaxnet:parser_ops_cc (parser_ops.so is a cc_binary, which can't be used as a dependency) to mnist_inference's deps:
deps = [
    "@syntaxnet//syntaxnet:parser_ops_cc",
    "@grpc//:grpc++",
    ...
Note that this still won't quite work: parser_ops_cc is a private target in syntaxnet (so it can't be depended on from outside its package), but you could add an attribute to it like visibility = ["//visibility:public"] if you're just trying things out:
cc_library(
    name = "parser_ops_cc",
    srcs = ["ops/parser_ops.cc"],
    visibility = ["//visibility:public"],
    ...
The Wrong Way
You have a .so, which you can add as a source file for your binary: add the directory it's in as a new_local_repository() and add the .so to srcs in the BUILD file.
WORKSPACE file:
new_local_repository(
    name = "hacky_syntaxnet",
    path = "/path/to/syntaxnet/bazel-out/local-opt/bin/syntaxnet",
    build_file_content = """
exports_files(glob(["*"]))  # Make all of the files available.
""",
)
BUILD file:
srcs = [
    "mnist_inference.cc",
    "@hacky_syntaxnet//:parser_ops.so",
],

Using extended classes in gst (GNU Smalltalk)?

This is a bit of a follow-up question to this one.
Say I've managed to extend the Integer class with a new method 'square'. Now I want to use it.
Calling the new method from within the file is easy:
Integer extend [
    square [
        | r |
        r := self * self.
        ^r
    ]
]

x := 5 square.
x printNl.
Here, I can just call $ gst myprogram.st in bash and it'll print 25. But what if I want to use the method from inside the GNU Smalltalk application? Like this:
$ gst
st> 5 square
25
st>
This may have to do with images; I'm not sure. This tutorial says I can edit the ~/.st/kernel/Builtins.st file to change what files are loaded into the kernel, but I have no such file.
I would not edit what's loaded into the kernel. To elaborate on my comment, one way of loading previously created files into the environment for GNU Smalltalk, outside of using image files, is to use packages.
A sample package.xml file, which defines the package per the documentation, would look like:
<package>
  <name>MyPackage</name>
  <!-- Include any prerequisite packages here, if you need them -->
  <prereq>PrerequisitePackageName</prereq>
  <filein>Foo.st</filein>
  <filein>Bar.st</filein>
</package>
A sample Makefile for building the package might look like:
# MyPackage makefile
#
PACKAGE_DIR = ~/.st
PACKAGE_SPEC = package.xml
PACKAGE_FILE = $(PACKAGE_DIR)/MyPackage.star
PACKAGE_SRC = \
	Foo.st \
	Bar.st

$(PACKAGE_FILE): $(PACKAGE_SRC) $(PACKAGE_SPEC)
	gst-package -t ~/.st $(PACKAGE_SPEC)
With the above files in your working directory alongside Foo.st and Bar.st, you can run make and it will build the .star package file and put it in ~/.st (the first place gst looks for packages). When you run your environment, you can then use PackageLoader to load it in:
$ gst
GNU Smalltalk ready
st> PackageLoader fileInPackage: 'MyPackage'
Loading package PrerequisitePackage
Loading package MyPackage
PackageLoader
st>
Then you're ready to rock and roll... :)

Shared SBT module in Intellij Idea without publish-local

SBT project A depends on B. Both projects have separate VCS repositories and their own lifecycles, including automated building and testing.
Is there a way to work with these projects conveniently in Intellij Idea?
By conveniently, I mostly mean:
CMD-Click points to actual, editable classes between projects, not to a read-only published jar
No need to run sbt publish-local every time a change is made
Breakpoints work as expected
It looks like none of this is possible if the dependency is declared simply in libraryDependencies. However, declaring the relationship with relative paths would break automated builds.
Here is what worked for me:
In project A, I created a file local-dependencies.sbt (the name is not important) with the following contents:
libraryDependencies ~= {l => l.filter(_.name != "my-utils")}
lazy val utils = RootProject(file("../my-utils"))
lazy val root = Project(id = "ProjectA", base = file(".")).dependsOn(utils)
The first line excludes my-utils from the libraryDependencies defined in build.sbt, which are still needed for automated builds.
Now Project A has my-utils as Module dependency, and not Library dependency, which addresses the mentioned issues.
Note that local-dependencies.sbt is for the local environment only and should be ignored in VCS, e.g.:
$ cat .hgignore
local-dependencies.sbt
Create IntelliJ Scala sbt Project A.
Then open build.sbt and enter:
name := "A"
version := "0.1"
scalaVersion := "2.12.7"
lazy val b = (project in file("B"))

lazy val root = (project in file("."))
  .aggregate(b)
  .dependsOn(b)
and save this. When asked "sbt projects need to be imported" in a small dialog box at the right bottom corner, select "Enable Auto-Import".
IntelliJ now creates the subfolder B automatically, and places a target subfolder therein. In subfolder B, create a new build.sbt for B containing:
sourceDirectory in Compile := (baseDirectory( _ / "src" )).value
After saving, create the new directories B/src/scala.
Open the sbt Tool Window (Menu: View -> Tool Windows -> sbt) and click the "Refresh all sbt projects" icon.
Add your library Scala files to B/src/scala.
Now you have a multi-module project, consisting of the main module A, which depends on the library sub-module B.

rpm spec file skeleton to real spec file

The aim is to have a skeleton spec file, fun.spec.skel, which contains placeholders for Version, Release, and that kind of thing.
For the sake of simplicity, I am trying to make a build target which fills in those variables, transforming fun.spec.skel into a fun.spec that I can then commit to my GitHub repo. This is done so that rpmbuild -ta fun.tar works nicely and no manual modifications of fun.spec.skel are required (people tend to forget to bump the version in the spec file, but not in the build system).
Assuming the implied question is "How would I do this?", the common answer is to put placeholders in the file like ##VERSION## and then sed the file, or get more complicated and have autotools do it.
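As a minimal illustration of that placeholder idea (done in Python rather than sed; the file names come from the question, the placeholder names are just examples, and where the version string comes from is up to your build system):
# render_spec.py -- fill ##VERSION## / ##RELEASE## placeholders in fun.spec.skel
# and write the result to fun.spec. The placeholder names are illustrative.
import sys

def render(version, release="1",
           skel_path="fun.spec.skel", spec_path="fun.spec"):
    with open(skel_path) as f:
        text = f.read()
    text = text.replace("##VERSION##", version).replace("##RELEASE##", release)
    with open(spec_path, "w") as f:
        f.write(text)

if __name__ == "__main__":
    # e.g.: python render_spec.py 1.2.3 2
    render(sys.argv[1], sys.argv[2] if len(sys.argv) > 2 else "1")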
We place a version.mk file in our project directories which defines the version variables. Sample content:
RELPKG=foopackage
RELFULLVERS=1.0.0
As part of a script which builds the RPM, we can source this file:
#!/bin/bash
. $(pwd)/version.mk
export RELPKG RELFULLVERS
if [ -z "${RELPKG}" ]; then exit 1; fi
if [ -z "${RELFULLVERS}" ]; then exit 1; fi
This leaves us a couple of options to access the values which were set:
We can define macros on the rpmbuild command line:
% rpmbuild -ba --define "relpkg ${RELPKG}" --define "relfullvers ${RELFULLVERS}" foopackage.spec
We can access the environment variables using %{getenv:...} in the spec file itself (though this makes errors harder to deal with...):
%define relpkg %{getenv:RELPKG}
%define relfullvers %{getenv:RELFULLVERS}
From here, you simply use the macros in your spec file:
Name: %{relpkg}
Version: %{relfullvers}
We have similar values (provided by environment variables enabled through Jenkins) which provide the build number which plugs into the "Release" tag.
I found two ways:
a) use something like
Version: %(./waf version)
where version is a custom waf target
from waflib.Context import Context

def version_fun(ctx):
    # VERSION is the module-level version string defined at the top of the wscript
    print(VERSION)

class version(Context):
    """Print out the version and only the version"""
    cmd = 'version'
    fun = 'version_fun'
This checks the version at rpm build time.
b) create a target that modifies the specfile itself
from waflib.Context import Context
from waflib import Logs
import re

def bumprpmver_fun(ctx):
    spec = ctx.path.find_node('oregano.spec')
    data = None
    with open(spec.abspath()) as f:
        data = f.read()
    if data:
        data = re.sub(r'^(\s*Version\s*:\s*)[\w.]+\s*', r'\1 {0}\n'.format(VERSION), data, flags=re.MULTILINE)
        with open(spec.abspath(), 'w') as f:
            f.write(data)
    else:
        Logs.warn("Didn't find that spec file: '{0}'".format(spec.abspath()))

class bumprpmver(Context):
    """Bump version"""
    cmd = 'bumprpmver'
    fun = 'bumprpmver_fun'
The latter is used in my pet project oregano on GitHub.