How do I use my own modules in a JuliaBox notebook?

I've recently started using JuliaBox for programming in Julia, and I want to use my own modules that I've previously written using the Juno-Atom IDE. I've uploaded the relevant modules to JuliaBox, but I am unable to call them from a JuliaBox notebook. The error message I get is as follows:
using MyModule
ArgumentError: Module MyModule not found in current path.
Run `Pkg.add("MyModule")` to install the MyModule package.
Stacktrace:
[1] _require(::Symbol) at ./loading.jl:435
[2] require(::Symbol) at ./loading.jl:405
[3] include_string(::String, ::String) at ./loading.jl:522
I originally had the module in a separate folder called 'modules', but even after moving it to the main folder (same location as the notebook), I still get the same error message.
I have ascertained the working directory:
pwd()
"/mnt/juliabox"
...and that seems to be the folder where my module is currently stored. At least, that's the directory displayed when I try to move the module file on the main JuliaBox screen.
I did try installing the module as an unregistered package under Package Builder (I was getting desperate!), but that didn't work either.
So I'm wondering whether I need to add something to the JULIA_LOAD_PATH in Environment Variables; however, that would seem to rather defeat the purpose of using an online version of Jupyter notebooks, which presumably is to allow easy access anywhere.
Anyway, I've run out of ideas, so if anyone could give me a clue as to where I am going wrong it would be very much appreciated.

If your module file is in the main (home) folder, add that folder to the LOAD_PATH (it is not on the load path by default). Customize the path if you put your file somewhere else.
push!(LOAD_PATH, homedir())
import MyModule
or
include("MyModule.jl") # if it is already in pwd()
import MyModule
The issue is not related to JuliaBox or IJulia; that is simply how you import a module in Julia: you either put its folder on the LOAD_PATH or include the file containing the module.
https://docs.julialang.org/en/stable/manual/modules/#Relative-and-absolute-module-paths-1
I believe this issue on GitHub addresses the problem you are facing: https://github.com/JuliaLang/julia/issues/4600

I did try installing the module as an unregistered package under Package Builder (I was getting desperate!), but that didn't work either.
I think the package builder functionality is working properly. Just try creating a dummy module with the following structure and contents:
~/MyModule.jl> tree
.
├── REQUIRE
└── src
    ├── functions
    │   └── myfunc.jl
    └── MyModule.jl
2 directories, 3 files
~/MyModule.jl> cat REQUIRE
julia 0.6
~/MyModule.jl> cat src/functions/myfunc.jl
myfunc(x) = 2x
~/MyModule.jl> cat src/MyModule.jl
module MyModule
export myfunc
include(joinpath("functions", "myfunc.jl"))
end
Then, git init a repository inside the directory, git add and git commit all the files, add a remote repository (on GitHub or GitLab, say) with git remote add, and git push your local repository to that remote. You should then see that the unregistered-package option works as intended.
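As a rough sketch, that sequence might look like the following (the remote URL is a placeholder for your own repository):
~/MyModule.jl> git init
~/MyModule.jl> git add REQUIRE src
~/MyModule.jl> git commit -m "Add dummy MyModule package"
~/MyModule.jl> git remote add origin https://github.com/<your-username>/MyModule.jl.git
~/MyModule.jl> git push -u origin master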
All that remains is to call
julia> using MyModule
julia> myfunc(10)
20
EDIT. You can try adding https://github.com/aytekinar/MyModule.jl as an unregistered package to your JuliaBox. That repository hosts the above-mentioned dummy module.

Related

Singularity and interior dynamic libraries

I am currently working on getting a bigger (C++) project inside a Singularity container. So far everything works well, until I try to execute the container image, at which point it cannot find a dynamic library file that I previously built inside the container:
./MyProject.img
/<some path>/MyExecutable: error while loading shared libraries: libmongocxx.so._noabi: cannot open shared object file: No such file or directory
My first thought was that maybe building this dependency inside the container had somehow failed, so I added ls /usr/local/lib/ at the end of the %post section of my recipe to check on that, but everything there looks fine:
+ ls /usr/local/lib/
[...]
libmongocxx.so
libmongocxx.so.3.6.0
libmongocxx.so._noabi
[...]
So my next thought was that maybe the library folder is for some reason not part of my container's environment variables, so I extended the %post section with
export PATH=$PATH:/usr/local/lib/
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib/
still to no avail.
Is there some property of Singularity containers I am missing here? Do I need to somehow extract the dynamic library file to outside of the container? Or did I make some stupid mistake I just can't see here?
(I have tagged the question only with singularity-container for now, as I don't think this is anything specific to C++, but if somebody thinks otherwise feel free to add a tag. My container uses Bootstrap: docker and From: ubuntu:18.04, in case that is relevant.)
Edit: I also explicitly gave the dynamic libraries execute permissions, just in case, and listed their permissions:
lrwxrwxrwx 1 root root 20 Sep 10 10:51 libmongocxx.so._noabi -> libmongocxx.so.3.6.0
Didn't work either.
My first guess is that your local environment is overwriting the variables in the image. You can use singularity run --cleanenv MyProject.img to prevent your current environment from persisting into the container. If there are variables you do want to pass in there, you can export SINGULARITYENV_SOMEVAR=foo to have SOMEVAR=foo set in the container environment.
If that doesn't do it, modify the %runscript to include an env | sort so you can see exactly what's set when it attempts to run your code.
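For example, a minimal %runscript along those lines might look like this (the exec line is just a stand-in for however you currently launch your executable):
%runscript
    env | sort
    exec /<some path>/MyExecutable "$@"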

How to hack on installed perl6 module source?

I'd like to be able to view and make changes to the source code of installed (via zef) perl6 modules. How can I do that?
On my system, the module sources are under ~/.perl6/sources/ and there's also some kind of metadata file about the modules in ~/.perl6/dist/.
I can also use zef locate ... to show a module's source path, but making changes directly to the source files doesn't seem to have any effect (i.e., using the module from the REPL doesn't show my changes).
I'm guessing that's because the modules were pre-compiled, and perl6 doesn't pick up my changes and re-precompile the modules when I edit the source files directly that way...
UPDATE: Deleting the corresponding pre-compiled files under ~/.perl6/precomp/... seems to work, but I'm not sure how and if that messes up anything.
I'd like to be able to view and make changes to the source code of installed (via zef) perl6 modules. How can I do that?
Please, don't do it that way. Installed modules are supposed to be immutable, and as you've found out: if there is a pre-compiled version of a module available, it will not check whether the original source file has been updated. It doesn't have to, because the installed source is considered immutable.
If you want to test changes on an installed module, please download the tar file / git clone the module's distribution, make the changes you need in there, and then do:
zef install . --force-install
while in the top directory of the distribution. That will re-install the module and handle pre-compilation for you.
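In full, that workflow is roughly the following (the repository URL and module name are placeholders):
git clone https://github.com/<author>/<SomeModule>.git
cd <SomeModule>
# edit the sources (usually under lib/) as needed
zef install . --force-install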

ImportError: No module named my project (sys.path is correct)

This is kind of embarrassing because of how simple and common this problem is, but I feel I've checked everything.
WSGI file is located at: /var/www/igfakes/server.wsgi
Apache is complaining that it can't import my project's module, so I decided to start up a Python shell and see if it's any different there - nope.
All the proof is in the following screenshot; I'll walk you through it.
First, see that I cannot import my project.
Then I import sys and check the path.
Note /var/www in the path.
Leave Python.
Check the directory, then confirm my project is in that same directory.
My project is exactly where I'm specifying. Any idea what's going on?
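In commands, the session in the screenshot went roughly like this (the package name igfakes is taken from the path above; the actual output is in the screenshot):
$ cd /var/www
$ python
>>> import igfakes   # fails with "ImportError: No module named ..."
>>> import sys
>>> sys.path         # the list includes '/var/www'
>>> exit()
$ ls /var/www        # the igfakes directory is right there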
I've followed a few different tutorials, all with the same instructions, like this one.

python: converting an egg-info directory to dist-info

I am working on an existing python application inside of a virtualenv environment. It is already set up to use wheel within its deployment.
I have added another module which my application now needs, and this module only exists in egg format. It is currently installed among all the other modules within ./env/lib/python3.6/site-packages, and an egg-info directory exists for it.
My question is this: how do I convert this one egg-info directory to wheel format, so that it gets included in the application's deployment when I do the following? ...
python3 setup.py bdist_wheel upload -r XXXXXXXX
Assuming I have installed a module under ./env/lib/python3.6/site-packages/the-module-1.2.3.egg-info, what are the steps to convert that module to dist-info?
Note that I don't see any *.egg file for that module, only the egg-info directory.
Thank you.

Why are `__init__.py` and `BUILD` needed inside TensorFlow's `models/tutorials/rnn/translate`?

Inside the tensorflow/models/tutorials/rnn/translate folder, we have a few files including __init__.py and BUILD.
Without __init__.py and BUILD files, the translate script can still manage to run.
What is the purpose of __init__.py and BUILD here? Are we supposed to install or build it using these two files?
The BUILD file supports using Bazel for hermetic building and testing of the model code. In particular, a BUILD file is present in that directory to define the integration test translate_test.py and its dependencies, so that we can run it on a continuous integration system (e.g., Jenkins).
The __init__.py file causes Python to treat that directory as a package. See this question for a discussion of why __init__.py is often present in a Python source directory. While this file is not strictly necessary to invoke translate.py directly from that directory, it is necessary if we want to import the code from translate.py into a different module.
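As a hypothetical illustration of that last point (assuming you are starting from the repository root):
cd models/tutorials/rnn
python -c "import translate"   # treats the directory as a package; on Python 2 this needs translate/__init__.py
                               # (Python 3.3+ would also accept it as a namespace package)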
(Note that when you run a Python binary through Bazel, the build system will automatically generate __init__.py files if they are missing. However, TensorFlow's repositories often have explicit __init__.py files in Python source directories so that you can run the code without invoking Bazel.)