Is it possible to use pip to install a package over ssh in a self-hosted gitlab? - ssh

I have a self-hosted gitlab and I would like to install a package hosted there using ssh.
I tried:
pip install git+ssh://git@<my_domain>:se7entyse7en/<project_name>.git
Here's the output:
Downloading/unpacking git+ssh://git@<my_domain>:se7entyse7en/<project_name>.git
Cloning ssh://git@<my_domain>:se7entyse7en/<project_name>.git to /var/folders/3r/v7swlvdn2p7_wyh9wj90td2m0000gn/T/pip-4_JdRU-build
ssh: Could not resolve hostname <my_domain>:se7entyse7en: nodename nor servname provided, or not known
fatal: Could not read from remote repository.
Please make sure you have the correct access rights and the repository exists.
Update:
I tried to upload it to gitlab.com, and after having uploaded the repo I tried to install it by running:
pip install git+ssh://git@gitlab.com:loumarvincaraig/<project_name>.git
but nothing changed. In particular, here's the content of pip.log:
/Users/se7entyse7en/Envs/test/bin/pip run on Mon Nov 17 22:14:51 2014
Downloading/unpacking git+ssh://git@gitlab.com:loumarvincaraig/<project_name>.git
Cloning ssh://git@gitlab.com:loumarvincaraig/<project_name>.git to /var/folders/3r/v7swlvdn2p7_wyh9wj90td2m0000gn/T/pip-91JVFi-build
Found command 'git' at '/usr/local/bin/git'
Running command /usr/local/bin/git clone -q ssh://git@gitlab.com:loumarvincaraig/<project_name>.git /var/folders/3r/v7swlvdn2p7_wyh9wj90td2m0000gn/T/pip-91JVFi-build
Complete output from command /usr/local/bin/git clone -q ssh://git@gitlab.com:loumarvincaraig/<project_name>.git /var/folders/3r/v7swlvdn2p7_wyh9wj90td2m0000gn/T/pip-91JVFi-build:
Cleaning up...
Command /usr/local/bin/git clone -q ssh://git@gitlab.com:loumarvincaraig/<project_name>.git /var/folders/3r/v7swlvdn2p7_wyh9wj90td2m0000gn/T/pip-91JVFi-build failed with error code 128 in None
Exception information:
Traceback (most recent call last):
File "/Users/se7entyse7en/Envs/test/lib/python2.7/site-packages/pip/basecommand.py", line 134, in main
status = self.run(options, args)
File "/Users/se7entyse7en/Envs/test/lib/python2.7/site-packages/pip/commands/install.py", line 236, in run
requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
File "/Users/se7entyse7en/Envs/test/lib/python2.7/site-packages/pip/req.py", line 1092, in prepare_files
self.unpack_url(url, location, self.is_download)
File "/Users/se7entyse7en/Envs/test/lib/python2.7/site-packages/pip/req.py", line 1231, in unpack_url
return unpack_vcs_link(link, loc, only_download)
File "/Users/se7entyse7en/Envs/test/lib/python2.7/site-packages/pip/download.py", line 410, in unpack_vcs_link
vcs_backend.unpack(location)
File "/Users/se7entyse7en/Envs/test/lib/python2.7/site-packages/pip/vcs/__init__.py", line 240, in unpack
self.obtain(location)
File "/Users/se7entyse7en/Envs/test/lib/python2.7/site-packages/pip/vcs/git.py", line 111, in obtain
call_subprocess([self.cmd, 'clone', '-q', url, dest])
File "/Users/se7entyse7en/Envs/test/lib/python2.7/site-packages/pip/util.py", line 670, in call_subprocess
% (command_desc, proc.returncode, cwd))
InstallationError: Command /usr/local/bin/git clone -q ssh://git@gitlab.com:loumarvincaraig/<project_name>.git /var/folders/3r/v7swlvdn2p7_wyh9wj90td2m0000gn/T/pip-91JVFi-build failed with error code 128 in None

I don't know why, but running the following command worked (a slash instead of : after <my_domain>):
pip install git+ssh://git@<my_domain>/se7entyse7en/<project_name>.git
#                                    ^
#                                    slash instead of :

Yes. The general form is:
pip install git+ssh://git@<my_domain>:22/<project_group>/<project_name>.git
In an ssh:// URL, the colon after the host introduces a port number, and 22 is the default ssh port. Because you control your own server, it may listen on a different port, so git lets you spell the port out explicitly. If you omit the :22/ part, separate the host from the path with a slash alone.
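The difference is visible in how an ssh:// URL is parsed. A minimal sketch using Python's standard library (example.com is a placeholder host):

```python
from urllib.parse import urlparse

# In an ssh:// URL the text after the host's colon is parsed as a port,
# which is why "git+ssh://git@host:group/name.git" cannot resolve:
# "host:group" is not a valid hostname/port pair. With a slash (or an
# explicit numeric port) the path is separated correctly.
u = urlparse("ssh://git@example.com:22/group/project.git")
print(u.hostname)  # example.com
print(u.port)      # 22
print(u.path)      # /group/project.git
```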

Related

Accessing Kaggle tools in VM by mounting key

I am trying to use the kaggle command line tool and I am running into problems using it inside my own VM. I downloaded the API token from the site and placed it in /.kaggle/kaggle.json on Windows. My VM runs Ubuntu, and in the Vagrantfile I have the following:
config.vm.synced_folder ENV['HOME'] + "/.kaggle", "/home/ubuntu/.kaggle", mount_options: ['dmode=700,fmode=700']
config.vm.provision "shell", inline: <<-SHELL
echo "export KAGGLE_CONFIG_DIR='/home/ubuntu/.kaggle/kaggle.json'" >> /etc/profile.d/myvar.sh
SHELL
When running the env command in the VM I can see it is set:
KAGGLE_CONFIG_DIR=/home/ubuntu/.kaggle/kaggle.json
However, when I try to use the kaggle command, for example kaggle -h, I get the following:
(main) vagrant@dev:/home/ubuntu/.kaggle$ ls
kaggle.json
(main) vagrant@dev:/home/ubuntu/.kaggle$ kaggle -h
Traceback (most recent call last):
File "/user/home/venvs/main/bin/kaggle", line 5, in <module>
from kaggle.cli import main
File "/user/home/venvs/main/lib/python3.7/site-packages/kaggle/__init__.py", line 23, in <module>
api.authenticate()
File "/user/home/venvs/main/lib/python3.7/site-packages/kaggle/api/kaggle_api_extended.py", line 149, in authenticate
self.config_file, self.config_dir))
OSError: Could not find kaggle.json. Make sure it's located in /home/ubuntu/.kaggle/kaggle.json. Or use the environment method.
The paths are all correct and the file is where it should be looking for it. Anyone know what the issue could be? Is it because it is mounted?
Alright, I misread the instructions: "You can define a shell environment variable KAGGLE_CONFIG_DIR to change this location to $KAGGLE_CONFIG_DIR/kaggle.json"
So the env variable should be /home/ubuntu/.kaggle/ instead of /home/ubuntu/.kaggle/kaggle.json.
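The behaviour follows from how the client builds the path: it joins KAGGLE_CONFIG_DIR with the file name kaggle.json, so pointing the variable at the file itself doubles the name. A minimal sketch:

```python
import os

# KAGGLE_CONFIG_DIR is treated as a directory; the client appends
# the file name "kaggle.json" to it.
correct = os.path.join("/home/ubuntu/.kaggle", "kaggle.json")
print(correct)  # /home/ubuntu/.kaggle/kaggle.json

# Setting the variable to the file itself yields a path that cannot
# exist, hence the OSError in the traceback above:
wrong = os.path.join("/home/ubuntu/.kaggle/kaggle.json", "kaggle.json")
print(wrong)    # /home/ubuntu/.kaggle/kaggle.json/kaggle.json
```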

OSError: [Errno 8] when running selenium in python in a docker container

I've recently learned the basics of Docker and how to create and run images. I'm trying to create an image of a python script that scrapes some webpages for data and uploads it to a server. I'm using Selenium, Chromium, and a Windows chromedriver. I'm trying to build the image on my Windows machine and be able to deploy it on a bunch of Linux/Windows servers. Currently, I'm only building and running on the same Windows machine, just until I get it running, but I keep getting the same error, even though the script runs just fine directly on the machine itself.
This is the error:
Traceback (most recent call last):
File "my-app.py", line 796, in <module>
startScraper();
File "my-app.py", line 92, in startScraper
browser = webdriver.Chrome(chrome_options = options, executable_path = path_to_chromedriver);
File "/usr/local/lib/python3.6/site-packages/selenium/webdriver/chrome/webdriver.py", line 62, in __init__
self.service.start()
File "/usr/local/lib/python3.6/site-packages/selenium/webdriver/common/service.py", line 74, in start
stdout=self.log_file, stderr=self.log_file)
File "/usr/local/lib/python3.6/subprocess.py", line 707, in __init__
restore_signals, start_new_session)
File "/usr/local/lib/python3.6/subprocess.py", line 1326, in _execute_child
raise child_exception_type(errno_num, err_msg)
OSError: [Errno 8] Exec format error
It seems to be related to the Chrome options, but the error persists even when I remove all of the add_argument options. Here they are:
options = webdriver.ChromeOptions();
options.binary_location = './chrome-win32/chrome.exe';
options.add_argument('headless')
options.add_argument('window-size=1400x1300')
options.add_argument('--mute-audio')
options.add_argument('--disable-web-security');
options.add_argument('--allow-running-insecure-content');
options.add_argument('--ignore-certificate-errors')
options.add_argument('--ignore-ssl-errors')
prefs = {"profile.managed_default_content_settings.images":2}
options.add_experimental_option("prefs", prefs);
path_to_chromedriver = './chromedriver.exe';
Is there anything that I'm missing to be able to run this scraper in a container? Thanks!
EDIT: I forgot to add the Dockerfile and how I build/run the image:
Dockerfile:
FROM python:3.6.0
WORKDIR /my-app
ADD . /my-app
RUN pip install -r requirements.txt
ENV NAME Scraper
CMD ["python", "My_App.py"]
Build/Run image:
- docker build -t myapp .
- docker run myapp
Maybe there are some options that I don't know about that I'm missing?
You are trying to run a Windows .exe inside a Linux container, and that is not going to work. You will need to install Chromium and chromedriver inside your Dockerfile and update the code to use the correct paths:
FROM python:3.6.0
# the driver package may be named "chromedriver" on older Debian releases
RUN apt-get update && apt-get install -y chromium chromium-driver
WORKDIR /my-app
ADD . /my-app
RUN pip install -r requirements.txt
ENV NAME Scraper
CMD ["python", "My_App.py"]
Change your code to drop the Windows binary location and point at the Linux chromedriver:
options = webdriver.ChromeOptions()
options.add_argument('headless')
options.add_argument('window-size=1400x1300')
options.add_argument('--mute-audio')
options.add_argument('--disable-web-security')
options.add_argument('--allow-running-insecure-content')
options.add_argument('--ignore-certificate-errors')
options.add_argument('--ignore-ssl-errors')
prefs = {"profile.managed_default_content_settings.images": 2}
options.add_experimental_option("prefs", prefs)
path_to_chromedriver = '/usr/bin/chromedriver'  # path installed by the Debian package
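As a sanity check, "[Errno 8] Exec format error" is the kernel refusing to execute a binary format it does not recognise, such as a Windows PE file on Linux. A minimal reproduction, independent of Selenium:

```python
import os
import stat
import subprocess
import tempfile

# Create an "executable" whose contents are not a format the Linux
# kernel can run (here: the "MZ" magic bytes of a Windows .exe).
with tempfile.NamedTemporaryFile(suffix=".exe", delete=False) as f:
    f.write(b"MZ\x90\x00")
    path = f.name
os.chmod(path, stat.S_IRWXU)

try:
    subprocess.Popen([path])
except OSError as e:
    print(e.errno)  # 8 (ENOEXEC) on Linux
finally:
    os.unlink(path)
```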

celery with redis - unix timeout

I have an app using Celery for async tasks. I use Redis as its broker and result backend, with Redis configured to listen on a UNIX socket.
Here is my URL for Celery and the broker:
brok = 'redis+socket://:ABc@/tmp/redis.sock'
app = Celery('NTWBT', backend=brok, broker=brok)
app.conf.update(
    BROKER_URL=brok,
    BROKER_TRANSPORT_OPTIONS={
        "visibility_timeout": 3600
    },
    CELERY_RESULT_BACKEND=brok,
    CELERY_ACCEPT_CONTENT=['pickle', 'json', 'msgpack', 'yaml'],
)
But every time I add a job, Celery gives me this error:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 283, in trace_task
uuid, retval, SUCCESS, request=task_request,
File "/usr/local/lib/python2.7/dist-packages/celery/backends/base.py", line 257, in store_result
request=request, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/celery/backends/base.py", line 491, in _store_result
self.set(self.get_key_for_task(task_id), self.encode(meta))
File "/usr/local/lib/python2.7/dist-packages/celery/backends/redis.py", line 160, in set
return self.ensure(self._set, (key, value), **retry_policy)
File "/usr/local/lib/python2.7/dist-packages/celery/backends/redis.py", line 149, in ensure
**retry_policy
File "/usr/local/lib/python2.7/dist-packages/kombu/utils/__init__.py", line 246, in retry_over_time
return fun(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/celery/backends/redis.py", line 169, in _set
pipe.execute()
File "/usr/local/lib/python2.7/dist-packages/redis/client.py", line 2620, in execute
self.shard_hint)
File "/usr/local/lib/python2.7/dist-packages/redis/connection.py", line 897, in get_connection
connection = self.make_connection()
File "/usr/local/lib/python2.7/dist-packages/redis/connection.py", line 906, in make_connection
return self.connection_class(**self.connection_kwargs)
TypeError: __init__() got an unexpected keyword argument 'socket_connect_timeout'
Which option should I use so that Celery does not set a timeout for its Redis connection?
It seems that this problem is related to the version of Redis installed on your system: socket_connect_timeout was first introduced in Redis 2.10.0, so you need to update your version of Redis.
If you are running on an Ubuntu server, you can install it from these apt repositories:
$ sudo apt-get install -y python-software-properties
$ sudo add-apt-repository -y ppa:rwky/redis
$ sudo apt-get update
$ sudo apt-get install -y redis-server
and update to the latest version of Celery.
Here is the GitHub issue in Celery, since you are not the only one to run into this problem: https://github.com/celery/celery/issues/2903
If nothing works for you in any case, I suggest using RabbitMQ instead of Redis:
$ sudo apt-get install rabbitmq-server
$ sudo pip install librabbitmq
and in your app configure Celery with this CELERY_BROKER_URL:
'amqp://guest:guest@localhost:5672//'
I hope this answer fits all your needs.
Cheers
There are bugs in several libraries that cause this exception in Celery:
https://github.com/celery/celery/issues/2903
https://github.com/celery/kombu/pull/590
If you use Redis over a UNIX socket as the broker, there's no easy fix yet, short of monkey-patching the celery, kombu and/or redis-py libraries.
For now, I recommend that you use Redis over a TCP connection, or switch to another broker, e.g. RabbitMQ.
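For reference, a minimal sketch of the TCP fallback, reusing the configuration from the question (the password "ABc" and host 127.0.0.1 are placeholders; this requires celery installed and has not been run against a live broker):

```python
from celery import Celery

# Same configuration as in the question, but with Redis over TCP
# instead of the UNIX socket, which avoids the
# socket_connect_timeout code path entirely.
brok = 'redis://:ABc@127.0.0.1:6379/0'
app = Celery('NTWBT', backend=brok, broker=brok)
app.conf.update(
    BROKER_TRANSPORT_OPTIONS={"visibility_timeout": 3600},
)
```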

pkg_resources.DistributionNotFound for Ryu-oe

My goal is to have an optical LINC switch running and to use Ryu-oe to control it.
I get the following error when I try to run Ryu-oe per the instructions from this link.
(Ryu-oe is just the Ryu controller with some optical extensions.)
File "/usr/local/bin/ryu-manager", line 5, in <module>
from pkg_resources import load_entry_point
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2749, in <module>
working_set = WorkingSet._build_master()
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 446, in _build_master
return cls._build_from_requirements(__requires__)
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 459, in _build_from_requirements
dists = ws.resolve(reqs, Environment())
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 628, in resolve
raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: msgpack-python>=0.4.0
Anyone knows how I can solve the error?
OK, it seems the problem is solved. To be honest, I don't know which step fixed it. Here are some of the commands I ran:
Make sure you are in ryu-oe directory.
sudo -H ./run_tests.sh
sudo ./run_tests.sh
sudo -H python ./setup.py install
and then I ran sudo ryu-manager ~/ryu-oe/ryu/app/ofctl_rest.py.
Let me know which one worked for you so that we come up with a better answer.
This command worked for me:
$ sudo pip install --upgrade msgpack-python
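The check that ryu-manager performs at startup can also be reproduced by hand to confirm the requirement is satisfied after upgrading. A small sketch using pkg_resources, the same module shown in the traceback:

```python
import pkg_resources

# ryu-manager resolves its declared requirements via pkg_resources at
# startup; this repeats the check for the distribution the traceback
# reports as missing.
req = pkg_resources.Requirement.parse("msgpack-python>=0.4.0")
try:
    dists = pkg_resources.require(str(req))
    print("requirement satisfied:", dists[0])
except pkg_resources.ResolutionError:
    print("msgpack-python is missing or too old")
```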

Internal Server Error right after installation of OpenERP 7.0

I am new to OpenERP and I just installed OpenERP 7.0 on Ubuntu 12.04 using the all-in-one ".deb" file. But when I tried to open it, I got this error message:
Internal Server Error: The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.
I checked the "openerp-server.log" file and it gave me this:
self.gen.next()
File "/usr/share/pyshared/openerp/addons/web/http.py", line 422, in session_context
session_store.save(request.session)
File "/usr/share/pyshared/werkzeug/contrib/sessions.py", line 237, in save
dir=self.path)
File "/usr/lib/python2.7/tempfile.py", line 300, in mkstemp
return _mkstemp_inner(dir, prefix, suffix, flags)
File "/usr/lib/python2.7/tempfile.py", line 235, in _mkstemp_inner
fd = _os.open(file, flags, 0600)
OSError: [Errno 13] Permission denied: '/tmp/oe-sessions-openerp/tmpNUQsbf.__wz_sess'
What is going wrong and how can I fix it?
Thanks!
It looks like a permission issue. Check the permissions of your server, addons, and web directories and grant Read/Write/Create/Delete like this:
chmod -R 777 DIRPATH_OF_SERVER
chmod -R 777 DIRPATH_OF_ADDONS
chmod -R 777 DIRPATH_OF_WEB
After assigning all permissions, can you re-check it?
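That said, the traceback points at the session directory under /tmp rather than the addons tree, so a narrower fix is to hand just that directory to the user the server runs as. A sketch, assuming the service user is named openerp (adjust to your setup):

```shell
# Give the session directory to the server's user instead of opening
# everything up with 777 (the user name "openerp" is an assumption).
sudo chown -R openerp: /tmp/oe-sessions-openerp
sudo chmod -R u+rwX /tmp/oe-sessions-openerp
```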