pkg_resources.DistributionNotFound for Ryu-oe - sdn

My goal is to have an optical LINC switch running and to use Ryu-oe to control it.
I receive the following error when I try to run Ryu-oe following the instructions from this link.
Ryu-oe is just the Ryu controller with some optical extensions.
File "/usr/local/bin/ryu-manager", line 5, in <module>
from pkg_resources import load_entry_point
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2749, in <module>
working_set = WorkingSet._build_master()
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 446, in _build_master
return cls._build_from_requirements(__requires__)
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 459, in _build_from_requirements
dists = ws.resolve(reqs, Environment())
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 628, in resolve
raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: msgpack-python>=0.4.0
Does anyone know how I can solve this error?

OK, it seems that the problem is solved. To be honest, I don't know which of these actually solved it. Here are some of the commands I ran:
Make sure you are in the ryu-oe directory.
sudo -H ./run_tests.sh
sudo ./run_tests.sh
sudo -H python ./setup.py install
and then I ran sudo ryu-manager ~/ryu-oe/ryu/app/ofctl_rest.py.
Let me know which one worked for you so that we can come up with a better answer.

This command worked for me:
$ sudo pip install --upgrade msgpack-python
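If it helps to verify the fix, here is a quick check (a minimal sketch, assuming msgpack-python >= 0.4.0 installed correctly; the package imports as msgpack):
# Sanity check that pkg_resources can now resolve the requirement.
import pkg_resources
print(pkg_resources.get_distribution("msgpack-python").version)
import msgpack
print(msgpack.version)  # version tuple, e.g. (0, 4, 0) or newer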

Related

How to solve snakemake 5.32.0 env problem

I ran into a problem when running snakemake on a cluster system: a "missing output file" error. I searched for a solution; it may be because the rule is "run"-based rather than "shell"-based, which was a bug fixed in a newer version. But after I updated to version 5.32.0, another problem came up.
Error in rule predict_plasforest:
jobid: 34
output: linear_plasmid_genome/DP-Sample058-S54-adapter-phix-moving-sickle-sss_contigs_1kb.csv
log: log/isolating-linear-contig/DP-Sample058-S54-adapter-phix-moving-sickle-sss_linear_plasforest.out, log/isolating-linear-contig/DP-Sample058-S54-adapter-phix-moving-sickle-sss_linear_plasforest.err (check log file(s) for error message)
conda-env: /home/projects/ku_00041/apps/wanli/F_pipeline/conda_envs/60d0848d
shell:
export PATH=$PATH:/home/projects/ku_00041/apps/wanli/F_pipeline/db/blast/bin;cp /home/projects/ku_00041/apps/wanli/F_pipeline/db/plasforest/plasmid_refseq.* .;cp /home/projects/ku_00041/apps/wanli/F_pipeline/db/plasforest/plasforest.sav .;python3 /home/projects/ku_00041/apps/wanli/F_pipeline/db/plasforest/PlasForest.py -i assmebly_res/DP-Sample058-S54-adapter-phix-moving-sickle-sss_contigs_1kb.fasta -r -b -f --threads 30 -o linear_plasmid_genome/DP-Sample058-S54-adapter-phix-moving-sickle-sss_contigs_1kb.csv 2>log/isolating-linear-contig/DP-Sample058-S54-adapter-phix-moving-sickle-sss_linear_plasforest.err >log/isolating-linear-contig/DP-Sample058-S54-adapter-phix-moving-sickle-sss_linear_plasforest.out
(one of the commands exited with non-zero exit code; note that snakemake uses bash strict mode!)
cluster_jobid: 30135295
When I look at the log file:
Traceback (most recent call last):
File "/home/projects/ku_00041/apps/wanli/F_pipeline/db/plasforest/PlasForest.py", line 26, in <module>
from sklearn.ensemble import RandomForestClassifier
ModuleNotFoundError: No module named 'sklearn'
But sklearn is already installed in the conda env, and when I activate this env and rerun the same command manually, it works.
Does anyone know how to solve this problem?
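One way to narrow this down (a minimal sketch, assuming the env at conda_envs/60d0848d is the one snakemake activates for this rule) is to check which interpreter the job actually uses and whether sklearn is importable there:
# Run this inside the activated conda env, and again from within the failing job.
import sys
print(sys.executable)      # path of the Python interpreter being used
import sklearn             # raises ModuleNotFoundError if this env lacks scikit-learn
print(sklearn.__version__)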

Accessing Kaggle tools in VM by mounting key

I am trying to use the Kaggle command line tool and I am running into problems using it inside my own VM. I downloaded the API token from the site and placed it in ~/.kaggle/kaggle.json on Windows. My VM has Ubuntu installed, and in the Vagrantfile I have the following:
config.vm.synced_folder ENV['HOME'] + "/.kaggle", "/home/ubuntu/.kaggle", mount_options: ['dmode=700,fmode=700']
config.vm.provision "shell", inline: <<-SHELL
echo "export KAGGLE_CONFIG_DIR='/home/ubuntu/.kaggle/kaggle.json'" >> /etc/profile.d/myvar.sh
SHELL
When I run the env command in the VM, I see it is set as I expected:
KAGGLE_CONFIG_DIR=/home/ubuntu/.kaggle/kaggle.json
However, when I try to use the kaggle command, for example kaggle -h, I get the following:
(main) vagrant@dev:/home/ubuntu/.kaggle$ ls
kaggle.json
(main) vagrant@dev:/home/ubuntu/.kaggle$ kaggle -h
Traceback (most recent call last):
File "/user/home/venvs/main/bin/kaggle", line 5, in <module>
from kaggle.cli import main
File "/user/home/venvs/main/lib/python3.7/site-packages/kaggle/__init__.py", line 23, in <module>
api.authenticate()
File "/user/home/venvs/main/lib/python3.7/site-packages/kaggle/api/kaggle_api_extended.py", line 149, in authenticate
self.config_file, self.config_dir))
OSError: Could not find kaggle.json. Make sure it's located in /home/ubuntu/.kaggle/kaggle.json. Or use the environment method.
The paths are all correct and the file is where it should be looking for it. Does anyone know what the issue could be? Is it because it is mounted?
Alright, I misread the instructions: "You can define a shell environment variable KAGGLE_CONFIG_DIR to change this location to $KAGGLE_CONFIG_DIR/kaggle.json"
So the env variable should be /home/ubuntu/.kaggle/ instead of /home/ubuntu/.kaggle/kaggle.json.
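For reference, a quick way to confirm the fix from Python (a minimal sketch, assuming the kaggle package is installed and the token sits at /home/ubuntu/.kaggle/kaggle.json):
# Point KAGGLE_CONFIG_DIR at the directory, not at kaggle.json itself.
import os
os.environ["KAGGLE_CONFIG_DIR"] = "/home/ubuntu/.kaggle"
from kaggle.api.kaggle_api_extended import KaggleApi
api = KaggleApi()
api.authenticate()   # raises OSError if kaggle.json still cannot be found
print("authenticated")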

OSError: [Errno 8] when running selenium in python in a docker container

I've recently learned the basics of Docker and how to create and run images. I'm trying to create an image of a Python script that scrapes some webpages for data and uploads it to a server. I'm using Selenium, Chromium, and a Windows chromedriver. I want to build the image on my Windows machine and be able to deploy it on a bunch of Linux/Windows servers. Currently, I'm only building and running on the same Windows machine until I get it working, but I keep getting the same error, even though the script runs just fine directly on the machine itself.
This is the error:
Traceback (most recent call last):
File "my-app.py", line 796, in <module>
startScraper();
File "my-app.py", line 92, in startScraper
browser = webdriver.Chrome(chrome_options = options, executable_path = path_to_chromedriver);
File "/usr/local/lib/python3.6/site-packages/selenium/webdriver/chrome/webdriver.py", line 62, in __init__
self.service.start()
File "/usr/local/lib/python3.6/site-packages/selenium/webdriver/common/service.py", line 74, in start
stdout=self.log_file, stderr=self.log_file)
File "/usr/local/lib/python3.6/subprocess.py", line 707, in __init__
restore_signals, start_new_session)
File "/usr/local/lib/python3.6/subprocess.py", line 1326, in _execute_child
raise child_exception_type(errno_num, err_msg)
OSError: [Errno 8] Exec format error
It seems to be related to the Chrome options, but even when I remove all the add_argument options the error persists. Here are the options:
options = webdriver.ChromeOptions();
options.binary_location = './chrome-win32/chrome.exe';
options.add_argument('headless')
options.add_argument('window-size=1400x1300')
options.add_argument('--mute-audio')
options.add_argument('--disable-web-security');
options.add_argument('--allow-running-insecure-content');
options.add_argument('--ignore-certificate-errors')
options.add_argument('--ignore-ssl-errors')
prefs = {"profile.managed_default_content_settings.images":2}
options.add_experimental_option("prefs", prefs);
path_to_chromedriver = './chromedriver.exe';
Is there anything that I'm missing to be able to run this scraper in a container? Thanks!
EDIT: I forgot to add the Dockerfile and how I build/run the image:
Dockerfile:
FROM python:3.6.0
WORKDIR /my-app
ADD . /my-app
RUN pip install -r requirements.txt
ENV NAME Scraper
CMD ["python", "My_App.py"]
Build/Run image:
- docker build -t myapp .
- docker run myapp
Maybe there are some options that I don't know about that I'm missing?
You are trying to run a Windows .exe inside a Linux container, and that is not going to work. You will need to install Chrome/Chromium and chromedriver inside your Dockerfile and update the code to use the correct path:
FROM python:3.6.0
RUN apt update && apt install -y chromedriver
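# NOTE: the exact package names vary by distro (e.g. chromium-driver on Debian, chromium-chromedriver on Ubuntu); adjust this line as needed.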
WORKDIR /my-app
ADD . /my-app
RUN pip install -r requirements.txt
ENV NAME Scraper
CMD ["python", "My_App.py"]
Change your code to
options = webdriver.ChromeOptions();
options.add_argument('headless')
options.add_argument('window-size=1400x1300')
options.add_argument('--mute-audio')
options.add_argument('--disable-web-security');
options.add_argument('--allow-running-insecure-content');
options.add_argument('--ignore-certificate-errors')
options.add_argument('--ignore-ssl-errors')
prefs = {"profile.managed_default_content_settings.images":2}
options.add_experimental_option("prefs", prefs);
path_to_chromedriver = '/usr/lib/chromium/chromedriver';
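Continuing that snippet, the driver is then started with the same call as in the question, just pointing at the Linux chromedriver (a sketch; chrome_options/executable_path match the selenium 3.x usage shown above):
# Assumes "from selenium import webdriver" at the top of the script, as in the original.
browser = webdriver.Chrome(chrome_options=options, executable_path=path_to_chromedriver)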

TensorFlow: How to solve "ImportError: libcudnn.so.6"

I was using TensorFlow ver 0.12.0. I want to use ver 0.10, so I ran pip install tensorflow.
Then, when I use TensorFlow, the following error happens:
Traceback (most recent call last):
File "image_zooms_training.py", line 6, in
from keras.models import Sequential
File "/home/satan/anaconda3/envs/py27/lib/python2.7/site-packages/keras/init.py", line 2, in
from . import backend
........
ImportError: libcudnn.so.6: cannot open shared object file: No such file or directory
Failed to load the native TensorFlow runtime.
This error happens. Please tell me how to solve it. Thank you for your help.
Environment:
Ubuntu 14.04
Python 2.7
Download libcudnn.so.6 from https://developer.nvidia.com/cudnn, put it into the /usr/local/cuda/lib64/ folder, and then run the following commands:
sudo chmod u=rwx,g=rx,o=rx libcudnn.so.6.5.18
sudo ln -s libcudnn.so.6.5.18 libcudnn.so.6
sudo ln -s libcudnn.so.6 libcudnn.so
If you are using an Anaconda environment, change directory to /home/username/anaconda3/envs/tensorflow/lib/ and create the symlinks there instead.
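As a quick check afterwards (a minimal sketch, assuming a GPU build of TensorFlow is installed), importing TensorFlow should no longer complain about libcudnn:
# If libcudnn.so.6 is found, this import succeeds and prints the installed version.
import tensorflow as tf
print(tf.__version__)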

Is it possible to use pip to install a package over ssh in a self-hosted gitlab?

I have a self-hosted gitlab and I would like to install a package hosted there using ssh.
I tried:
pip install git+ssh://git@<my_domain>:se7entyse7en/<project_name>.git
Here's the output:
Downloading/unpacking git+ssh://git@<my_domain>:se7entyse7en/<project_name>.git
Cloning ssh://git@<my_domain>:se7entyse7en/<project_name>.git to /var/folders/3r/v7swlvdn2p7_wyh9wj90td2m0000gn/T/pip-4_JdRU-build
ssh: Could not resolve hostname <my_domain>:se7entyse7en: nodename nor servname provided, or not known
fatal: Could not read from remote repository.
Please make sure you have the correct access rights and the repository exists.
Update:
I uploaded the repo to gitlab.com and then tried to install it by running:
pip install git+ssh://git@gitlab.com:loumarvincaraig/<project_name>.git
but nothing changed. In particular, here's the content of pip.log:
/Users/se7entyse7en/Envs/test/bin/pip run on Mon Nov 17 22:14:51 2014
Downloading/unpacking git+ssh://git@gitlab.com:loumarvincaraig/<project_name>.git
Cloning ssh://git@gitlab.com:loumarvincaraig/<project_name>.git to /var/folders/3r/v7swlvdn2p7_wyh9wj90td2m0000gn/T/pip-91JVFi-build
Found command 'git' at '/usr/local/bin/git'
Running command /usr/local/bin/git clone -q ssh://git@gitlab.com:loumarvincaraig/<project_name>.git /var/folders/3r/v7swlvdn2p7_wyh9wj90td2m0000gn/T/pip-91JVFi-build
Complete output from command /usr/local/bin/git clone -q ssh://git@gitlab.com:loumarvincaraig/<project_name>.git /var/folders/3r/v7swlvdn2p7_wyh9wj90td2m0000gn/T/pip-91JVFi-build:
Cleaning up...
Command /usr/local/bin/git clone -q ssh://git@gitlab.com:loumarvincaraig/<project_name>.git /var/folders/3r/v7swlvdn2p7_wyh9wj90td2m0000gn/T/pip-91JVFi-build failed with error code 128 in None
Exception information:
Traceback (most recent call last):
File "/Users/se7entyse7en/Envs/test/lib/python2.7/site-packages/pip/basecommand.py", line 134, in main
status = self.run(options, args)
File "/Users/se7entyse7en/Envs/test/lib/python2.7/site-packages/pip/commands/install.py", line 236, in run
requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
File "/Users/se7entyse7en/Envs/test/lib/python2.7/site-packages/pip/req.py", line 1092, in prepare_files
self.unpack_url(url, location, self.is_download)
File "/Users/se7entyse7en/Envs/test/lib/python2.7/site-packages/pip/req.py", line 1231, in unpack_url
return unpack_vcs_link(link, loc, only_download)
File "/Users/se7entyse7en/Envs/test/lib/python2.7/site-packages/pip/download.py", line 410, in unpack_vcs_link
vcs_backend.unpack(location)
File "/Users/se7entyse7en/Envs/test/lib/python2.7/site-packages/pip/vcs/__init__.py", line 240, in unpack
self.obtain(location)
File "/Users/se7entyse7en/Envs/test/lib/python2.7/site-packages/pip/vcs/git.py", line 111, in obtain
call_subprocess([self.cmd, 'clone', '-q', url, dest])
File "/Users/se7entyse7en/Envs/test/lib/python2.7/site-packages/pip/util.py", line 670, in call_subprocess
% (command_desc, proc.returncode, cwd))
InstallationError: Command /usr/local/bin/git clone -q ssh://git@gitlab.com:loumarvincaraig/<project_name>.git /var/folders/3r/v7swlvdn2p7_wyh9wj90td2m0000gn/T/pip-91JVFi-build failed with error code 128 in None
I don't know why, but by running the following command it worked (slash instead of : after <my_domain>):
pip install git+ssh://git@<my_domain>/se7entyse7en/<project_name>.git
# ^
# slash instead of :
Yes. This is the default form:
pip install git+ssh://git@<my_domain>:22/<project_group>/<project_name>.git
The bare colon on its own implies the default SSH port number, 22. Because you control your own server, its SSH port could be different, so Git lets you spell it out explicitly as :<port>/; if you stick with the default port, just use / after the host instead of a bare colon.