Saving IPython aliases

In IPython 0.10 and 0.11, is there an easy way to make and save aliases? I know there is discussion of allowing aliases to be stored in 0.12, but what can I do with my students that will be easy? I'd like to save this alias:
alias rtupdate (cd ~/projects/researchtools; hg pull; hg update)
Is the only real option to edit ~/.ipython/ipythonrc or follow http://ipython.scipy.org/Wiki/tips for 0.10, or to work with the alias manager in 0.11 (http://wiki.ipython.org/Cookbook/Moving_config_to_IPython_0.11)?
Students each have their own VMware Ubuntu 11.04 virtual machine with IPython 0.10.1. I can make this a separate shell executable block in org-mode and add makefiles that will remind people how to do a pull and update with Mercurial, but I have yet to explain what a Makefile is. For example, this kind of hint:
https://bitbucket.org/schwehr/researchtools/src/829773b7db64/Makefile

Are your students on their own machines, or do you control their environment?
If you want configuration to survive from one session to the next, the official way to do that is to edit your config, but there are other ways. For instance, you could write an IPython extension which defines extra aliases, and provide that to your students.
What may be easiest for your students, though, is to simply provide a script to run on startup, containing the lines you want to run, defining aliases, etc. You can call it something like init.ipy, then just instruct IPython to run the script. This can be done in config with InteractiveShellApp.exec_files, or you can just specify it at the command-line with ipython -i init.ipy, or at any later point with %run init.ipy.
Note that a script with the .ipy extension is allowed to have IPython commands (e.g. %alias rtupdate (cd ~/projects/researchtools; hg pull; hg update)), but if you use .py it is treated as a regular Python script.
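For instance, an init.ipy for your class might look like the sketch below (the rtupdate alias is the one from the question; the extra hgst alias and the closing print are purely illustrative):
# init.ipy -- aliases for the research tools course
%alias rtupdate (cd ~/projects/researchtools; hg pull; hg update)
%alias hgst hg status
print "research tools aliases loaded"
Students can then start a session with ipython -i init.ipy, or load it later with %run init.ipy.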

Could not find List::Util

I'm trying to compile some Raku code I saw on https://replit.com/languages/raku. The code is from the question "Why is Raku reporting 'two terms in a row' when I define a new operator?".
It begins like this:
unit module Format;
use List::Util;
...
It fails to compile with:
 raku ./main.raku
===SORRY!=== Error while compiling /home/runner/l4gp3hvdnhd/./main.raku
Could not find List::Util in:
inst#/home/runner/.raku
inst#/opt/rakudo-pkg/share/perl6/site
inst#/opt/rakudo-pkg/share/perl6/vendor
inst#/opt/rakudo-pkg/share/perl6/core
ap#
nqp#
perl5#
at /home/runner/l4gp3hvdnhd/./main.raku:3
exit status 1
On the other hand, I see this is a valid module: https://raku.land/zef:lizmat/List::Util.
Why is it failing?
TL;DR Run zef install --/test List::Util in the console, put use lib '.'; at the top of your Raku main.raku, and run, don't walk, with your program, before gremlins gleefully render your efforts in vain. Or maybe just listen to Liz and Rawley.
As Liz and Rawley have noted, you need List::Util installed.
But while I largely agree with them in practice (it may be a pain to use replit to do what you're trying to do) I think a different response to complement theirs might be helpful.
One of the ways replit is trying to distinguish itself from other online evaluators is by aiming to be akin to a full dev environment.
In reality it's early days for their ambitious project, and beggars can't be choosers (if you're not paying, it's hard to complain if things don't work out as you might want). But of particular relevance for this SO question, it is worth noting that replit does have console/shell facilities, and that they've installed Rakudo Star, or perhaps just something like it, including the Raku package manager pretty much everyone uses (zef).
Thus this command, which I just ran in the console of a fresh replit Raku session, worked:
zef install --/test List::Util;
(The --/test tells zef not to run tests. I've only got a free account and it looked like replit killed zef's process when I ran just zef install List::Util during its running of tests. Presumably they take too long, but I don't know.)
And then this main.raku also worked:
use lib '.'; # Tell Raku(do) libs are in current directory.
use List::Util <notall>; # Load and import `notall` from module.
say notall { 42 }, 99; # Try it.
But now the rub. As I was composing this answer, the expected happened. My internet connection momentarily flaked out, replit rebooted the session, and while my main.raku code was rescued, both List::Util and my console history had disappeared, so I had to paste and rerun the install command to get the module installed again.
It's all just throwaway container magic, and there's only so much replit has done thus far to make the simulation of a real full local dev environment really work.
Maybe if your Internet connection is rock solid and/or you're using a paid replit account and/or it's the full moon, it'll all work out. Or maybe you're best off following Rawley's advice.
Speaking of which (I mean Rawley's advice to set up your Raku dev environment locally), if you do install locally you can also install the awesome free version of CommaIDE.
You do not have List::Util installed. Since you're using an online interpreter, you will most likely have a lot of trouble installing it there. Instead I recommend installing Raku on your local machine with rakubrew.
Then run the following commands:
rakubrew build # Make sure to follow the instructions at the end
rakubrew build-zef
zef install List::Util
Now you should be able to run your code on your local machine, and you'll have access to the List::Util library.

How to add a new version of VASP in pyiron

I have a new, compiled version of VASP that performs magnetic constraints of the local moment orientations on the fly, which I'd like to test and use with pyiron.
Please could you provide guidance and steps to follow in order to add this version of VASP to pyiron as one more executable?
Thank you,
Eduardo
You need to add corresponding run script(s) to the pyiron resources.
In your .pyiron config file the paths for the resources are stated as RESOURCE_PATHS = /comma/separated/list, /of/paths/to/the/resources.
In the resources you need to add the run scripts in the directory vasp/bin/, using the naming convention run_<code>_<version>[_mpie].sh. The actual run script is, of course, dependent on the cluster and the libraries used to build VASP, and might look similar to
run_vasp_version.sh (single core version):
module load intel/...
srun -n 1 /path/to/the/new/executable/version/vasp_std
run_vasp_version_mpie.sh (mpi version):
module load intel/... impi/...
srun -n $1 /path/to/the/new/executable/version/vasp_std
Please have a look at the other vasp run scripts which you have used from the shared resources.
(For MPIE colleagues: detailed examples of the setup/run scripts for VASP on cmti can be found in this private repo.)
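Once the run script is in place, the new executable should be selectable from a pyiron job. Below is a minimal sketch, assuming the standard pyiron job-creation API and a hypothetical run script named run_vasp_5.4.4_constrained.sh; the project and job names are placeholders.
from pyiron import Project

pr = Project("vasp_constraint_test")
job = pr.create_job(pr.job_type.Vasp, "mag_constraints")

# The version string corresponds to the run script suffix, i.e. the
# hypothetical run_vasp_5.4.4_constrained.sh maps to "5.4.4_constrained".
job.executable.version = "5.4.4_constrained"
print(job.executable.version)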

How can I install matplotlib for my AWS Elastic Beanstalk application?

I'm having a hell of a time deploying matplotlib on AWS Elastic Beanstalk. I gather that my issue comes from some dependencies and the way that EB deploys packages installed with PIP, and have attempted to follow the instructions here on SO for resolving the issue.
I first tried incrementally deploying, as suggested in the linked answer, by adding pieces of the matplotlib package stack to my requirements.txt file in stages. But this takes forever (for each stage) and is prone to failure and timing out (which seems to leave build directories behind that stall subsequent package installations).
So the simple solution mentioned off-handedly at the end of the answer appeals to me: just eb ssh, activate the virtualenv with
source /opt/python/run/venv/bin/activate
and pip install packages manually. But I can't get this to work either. First, I'm often confronted with left-behind build directories (as mentioned above):
pip can't proceed with requirement 'xxxx' due to a pre-existing build directory.
location: /opt/python/run/venv/build/xxxx
This is likely due to a previous installation that failed.
pip is being responsible and not assuming it can delete this.
Please delete it and try again.
But even after removing these, I consistently get
Exception:
Traceback (most recent call last):
File "/opt/python/run/venv/lib/python2.7/site-packages/pip/basecommand.py", line 122, in main
status = self.run(options, args)
File "/opt/python/run/venv/lib/python2.7/site-packages/pip/commands/install.py", line 278, in run
requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
File "/opt/python/run/venv/lib/python2.7/site-packages/pip/req.py", line 1197, in prepare_files
do_download,
File "/opt/python/run/venv/lib/python2.7/site-packages/pip/req.py", line 1375, in unpack_url
self.session,
File "/opt/python/run/venv/lib/python2.7/site-packages/pip/download.py", line 582, in unpack_http_url
unpack_file(temp_location, location, content_type, link)
File "/opt/python/run/venv/lib/python2.7/site-packages/pip/util.py", line 625, in unpack_file
untar_file(filename, location)
File "/opt/python/run/venv/lib/python2.7/site-packages/pip/util.py", line 533, in untar_file
os.makedirs(location)
File "/opt/python/run/venv/lib64/python2.7/os.py", line 157, in makedirs
mkdir(name, mode)
OSError: [Errno 13] Permission denied: '/opt/python/run/venv/build/xxxx'
in response to pip install xxxx (and sudo pip fails with sudo: pip: command not found).
What can I do to get this working on AWS EB? In particular, what do I need to do to get the simple SSH+pip approach working, or is there some other, simpler approach I should try?
FWIW, I have a .ebextensions/software.config with
packages:
  yum:
    gcc-c++: []
    gcc-gfortran: []
    python-devel: []
    atlas-sse3-devel: []
    lapack-devel: []
    libpng-devel: []
    freetype-devel: []
    zlib-devel: []
and a requirements.txt that ends with
pytz==2014.10
pyparsing==2.0.3
python-dateutil==2.4.0
nose==1.3.4
six>=1.8.0
mock==1.0.1
numpy==1.9.1
matplotlib==1.4.2
After about 4 hours, I've gotten as far as numpy (as reported by pip list in the EB virtualenv).
And (in case it matters) the user who is SSHing is in a group with the policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "elasticbeanstalk:*",
        "ec2:*",
        "elasticloadbalancing:*",
        "autoscaling:*",
        "cloudwatch:*",
        "s3:*",
        "sns:*",
        "cloudformation:*",
        "rds:*",
        "sqs:*",
        "iam:PassRole"
      ],
      "Resource": "*"
    }
  ]
}
I have used many approaches to build and deploy numpy/scipy/matplotlib, on Windows as well as Linux systems. I have used system-provided package managers (aptitude, rpm), 3rd-party package managers (pypm), Python package managers (easy_install, pip), source releases, used different build environments/tools (GCC, but also Intel MKL, OpenMP). While doing so, I have run into many many quite annoying situations, but have also learned a lot about the pros and cons of each approach.
I have no experience with Elastic Beanstalk (EB), but I have experience with EC2. I see that you can SSH into an instance and poke around. So, what I suggest further below is based on
above-stated experiences and on
the more or less obvious boundary conditions regarding Beanstalk and on
your application scenario, described in another question here on SO and on
the fact that you just want to get things running, quickly
My suggestion: start off with not building these things yourself. Do not use pip. If possible, try to use the package manager of the Linux distribution in place and let it handle the installation of everything required for you, with a single command (e.g. sudo apt-get install python-matplotlib).
Disadvantages:
possibly old package versions, depending on the Linux distro in use
non-optimized builds (e.g. not built against Intel MKL, not leveraging OpenMP features, or not using special instruction sets)
Advantages:
it quickly downloads, because packages are most likely cached near your machine
it quickly installs (these packages are pre-built, no compilation involved)
it just works
So, I hope you can just use aptitude or rpm or whatever on these machines and inherit the great work that the distribution package maintainers do for you, behind the scenes.
Once you are confident in your application and identified some bottleneck or issue, you might have reason to use a newer version of numpy/matplotlib/... or you might have reason to have a faster version of these, by creating an optimized build.
Edit: EB-related details of outlined approach
In the meantime we have learned that EB by default runs Amazon Linux, which is based on Red Hat Enterprise Linux. Accordingly, it uses yum as its package manager, and packages are in RPM format.
Amazon provides documentation about available packages. In Amazon Linux 2014.09, these packages are available: http://aws.amazon.com/de/amazon-linux-ami/2014.09-packages/
In this list we find
numpy-1.7.2
python-matplotlib-0.99.1.2
This version of matplotlib is very old; according to the changelog, it is from September 2009: "2009-09-21 Tagged for release 0.99.1".
I did not anticipate it to be so old, but still, it might be sufficient for your needs. So we proceed with our plan (but I'd understand if that's a blocker).
Now, we have learned that system Python and EB Python are isolated from each other. That does not mean that EB Python cannot access system Python site packages. We just need to tell it so. A simple and clean method is to set up a proper directory structure with the packages that should be accessible to EB Python, and to communicate this directory to EB Python via sys.path.
Clearly, we need to customize the bootstrapping phase of EB containers. The available tools are documented here: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
Obviously, we want to make use of the packages approach, and tell EB to install the numpy and python-matplotlib packages via yum. So the corresponding config file section should contain:
packages:
  yum:
    numpy: []
    python-matplotlib: []
Explicitly mentioning numpy might not be necessary; it likely is a dependency of python-matplotlib.
Also, we need to make use of the commands section:
You can use the commands key to execute commands on the EC2 instance. The commands are processed in alphabetical order by name, and they run before the application and web server are set up and the application version file is extracted.
The following three commands create above-mentioned directory, and set up symbolic links to the numpy/mpl installation paths (these paths hopefully are available in the moment these commands become executed):
commands:
  00-create-dir:
    command: "mkdir -p /opt/py26-selected-site-packages"
  01-link-numpy:
    command: "ln -s /usr/lib64/python2.6/site-packages/numpy /opt/py26-selected-site-packages/numpy"
  02-link-mpl:
    command: "ln -s /usr/lib64/python2.6/site-packages/matplotlib /opt/py26-selected-site-packages/matplotlib"
Two uncertainties: first, the AWS docs do not clarify whether packages are processed before commands are executed. You have to try; if it does not work, use container_commands. Second, it is just an educated guess that /usr/lib64/python2.6/site-packages/matplotlib is available after installing python-matplotlib. It should be installed to this place, but it may end up somewhere else. This needs to be tested. Numpy should end up where specified, as inferred from this article.
[UPDATE FROM SEB]
AWS documentation says "The cfn-init helper script processes these configuration sections in the following order: packages, groups, users, sources, files, commands, and then services."
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-init.html
So, your approach is safe
[/UPDATE]
The crucial step, as pointed out in the comments to this answer, is to tell your Python app where to look for packages. Direct modification of sys.path before attempting to import is a reliable way to take control of this. The following code adds our special directory to the selection of directories in which Python looks for packages, and then attempts to import matplotlib:
import sys
sys.path.append("/opt/py26-selected-site-packages")
from matplotlib import pyplot
The order in sys.path defines priorities, so in case there is any other matplotlib or numpy package available in one of the other directories, it might be a better idea to
sys.path.insert(0, "/opt/py26-selected-site-packages")
However, this should not be necessary if our whole approach was well thought-through.
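Once the import works, a quick headless smoke test can confirm the whole chain. This is just a sketch: it assumes the EB instance has no display (hence the Agg backend), and the output path is arbitrary.
import sys
sys.path.insert(0, "/opt/py26-selected-site-packages")

import matplotlib
matplotlib.use("Agg")  # file-based backend, no X display required
from matplotlib import pyplot as plt

fig = plt.figure()
plt.plot([0, 1, 2], [0, 1, 4])
fig.savefig("/tmp/mpl_smoke_test.png")  # if this file appears, matplotlib works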
To add to Jan-Philip's answer:
AWS Elastic Beanstalk uses the Amazon Linux distribution (except for .NET environments). Amazon Linux uses the yum package manager. matplotlib is available in Amazon's software repository.
[ec2-user@ip-1-1-1-174 ~]$ yum list | grep matplot
python-matplotlib.x86_64 0.99.1.2-1.6.amzn1 amzn-main
If this version is the one you need for your application, I would try to simply modify your .ebextensions/software.config file and to add the package to the yum section of it:
packages:
  yum:
    python-matplotlib: []
    python-devel: []
    atlas-sse3-devel: []
    lapack-devel: []
    libpng-devel: []
    freetype-devel: []
    zlib-devel: []
A last note about AWS Elastic Beanstalk and SSH.
While Amazon gives you the possibility to SSH to your Elastic Beanstalk instances, you should use this possibility only for debugging purposes, to understand why your app failed or is not installing as suggested.
Other than that, your deployment must be 100% automatic. When Elastic Beanstalk (Auto Scaling, to be precise) scales out your infrastructure (adds more instances) or scales it in (terminates instances) depending on your application workload, all your manual configuration will be lost.
Best practice is to not install SSH keys on your production environment; it further reduces the attack surface.
I might be a bit late to this question, but as AWS and a lot of the cloud service providers are moving to Docker, and taking into consideration that you haven't specified the platform, I have a fast solution to your question:
Use the generic Docker platform.
I created some images with Python, Numpy, Scipy and Matplotlib preinstalled, so you can directly pull and start using them with one line of code.
Python 2.7 (this one also has the versions that you were specifying for numpy and matplotlib):
sudo docker pull chuseuiti/pynuscimat2.7
Python 3.4
sudo docker pull chuseuiti/pynusci
However, you can create your own image or modify existing images.
In case you want to automate your instances, you can pass a Dockerfile to AWS with the definition of your image.
Tip, in case you don't know about Docker:
You need to log in before being able to pull:
sudo docker login
After pulling the image, you can create and work in a container created from the image with the following command:
sudo docker run -i -t chuseuiti/pynuscimat2.7 bash
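Once inside the container, a quick check that the scientific stack is importable might look like this (a sketch; the image above ships Python 2.7, hence the print statement form):
import numpy, scipy, matplotlib
print numpy.__version__, scipy.__version__, matplotlib.__version__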
PS. At least with the free tier, AWS is always complaining about running out of time with scipy and matplotlib; they take too much time to install, which is why I use this option.

Reading profile script in non-interactive mode with AIX implementation of ksh

Please note that this is an AIX related question.
I have a jenkins server running on Redhat which is running a node via SSH on an AIX server.
The commands are run non-interactively using SSH to a user on the AIX machine who has ksh as its standard shell.
The problem is that this build needs a number of environment variables, and I can't seem to get it to work.
I have tried:
Jenkins allows me to set some environment variables for the session. So I tried:
ENV="$HOME/.profile"
I tried creating a .kshrc file containing
. .profile
But none of these approaches seems to make ksh run the .profile script.
The .profile script contains the environment setup for the user I need.
How do I get an AIX implementation of ksh to run my .profile script before executing commands?
You need to specifically tell Jenkins that you want to execute them in ksh shell.
By default, Jenkins runs as sh <commands>.
Add a shebang to your shell command as the first line:
#!/bin/ksh
Most shells don't source their .profile files in non-interactive sessions. A simple solution is to source the .profile yourself as part of the command you are sending.
So instead of
yourcommand1; yourcommand2
you should send
. ~/.profile; yourcommand1; yourcommand2
over ssh
UPDATE after reading the comment about Jenkins controlling the ssh command
In the case your ssh command is performed by Jenkins you should have a look at https://wiki.jenkins-ci.org/display/JENKINS/SSH+Slaves+plugin, especially the 'Login profile files' paragraph.
I'd say one of these solutions is best:
Set all environment variables from Jenkins using the node's configure page. Install the EnvInject plugin to do this.
Write a wrapper around the java command on the slave that sources your profile script and adjust the JavaPath (also on the node's configure page) to point to that wrapper.
The only way I know of for setting environment variables that will apply for non-interactive shells on AIX is via /etc/environment. I believe this is the correct place, but it will of course then apply to all users and all shells.

Multiple Job (j3)

I am trying to run a GNU makefile with multiple jobs.
When I try executing 'make.exe -r -j3', I receive the following two errors:
make.exe: Do not specify -j or --jobs if sh.exe is not available.
make.exe: Resetting make for single job mode.
Do I have to add ' $(SH) -c' somewhere in the makefile? If so, where?
The error message suggests that make cannot find sh.exe. The file names indicate you are probably on Cygwin. I would investigate setting the PATH to include the location of sh.exe, or defining the value of SHELL as the name (or even the full path) of your shell.
Are you running this on Windows (more specifically, in the Windows shell)? If you are, you might want to read this:
http://www.gnu.org/software/make/manual/make.html#Parallel
more specifically:
On MS-DOS, the ‘-j’ option has no effect, since that system doesn't support multi-processing.
Once again, assuming you're running on Windows, you should get MinGW or Cygwin.