I use Python-Selenium in my spider (Scrapy). To use Selenium I need to install xvfb on Scrapinghub.
When I use apt-get to install xvfb, I get this error message:
E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)
E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?
Is there any other way to install xvfb on Scrapinghub?
UPDATE 1
I read this and tried to use Docker, but I am stuck at this stage:
shub-image init --requirements path/to/requirements.txt
I read this:
If you are getting an ImportError like this while running shub-image init:
You should make sure you have the latest version of shub installed by
running:
$ pip install shub --upgrade
but I still get this error:
Traceback (most recent call last):
File "/usr/local/bin/shub-image", line 7, in <module>
from shub_image.tool import cli
File "/usr/local/lib/python2.7/dist-packages/shub_image/tool.py", line 42, in <module>
command_module = importlib.import_module(module_path)
File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/usr/local/lib/python2.7/dist-packages/shub_image/push.py", line 4, in <module>
from shub.deploy import list_targets
ImportError: cannot import name list_targets
Did you try:
sudo apt-get install xvfb
Another way is to compile the package manually, something like:
apt-get source xvfb
./configure --prefix=$HOME/myapps
make
make install
A third way is to download the .deb from the package page https://pkgs.org/download/xvfb.
After downloading it, you can mv it to the directory of downloaded packages:
mv xvfb_1.16.4-1_amd64.deb /var/cache/apt/archives/
Then change to that directory and run:
sudo dpkg -i xvfb_1.16.4-1_amd64.deb
And that's all!
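Once xvfb is installed, one way to drive it from the spider is through a virtual display (a minimal sketch, assuming the pyvirtualdisplay and selenium packages are installed; the URL is just an example):
from pyvirtualdisplay import Display
from selenium import webdriver

# Start a virtual X display backed by Xvfb so the browser can run without a real screen
display = Display(visible=0, size=(1024, 768))
display.start()

driver = webdriver.Firefox()
driver.get("https://example.com")
print(driver.title)

driver.quit()
display.stop()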
I resolved my problem (using Selenium on Scrapinghub):
1- For xvfb in Docker I use
RUN apt-get install -qy xvfb
2- For creating the Docker image I used this, and for installing geckodriver I use this code:
#
# Geckodriver Dockerfile
#
FROM blueimp/basedriver
# Add the Firefox release channel of the Debian Mozilla team:
RUN echo 'deb http://mozilla.debian.net/ jessie-backports firefox-release' >> \
/etc/apt/sources.list \
&& curl -sL https://mozilla.debian.net/archive.asc | apt-key add -
# Install Firefox:
RUN export DEBIAN_FRONTEND=noninteractive \
&& apt-get update \
&& apt-get install --no-install-recommends --no-install-suggests -y \
firefox \
# Remove obsolete files:
&& apt-get clean \
&& rm -rf \
/tmp/* \
/usr/share/doc/* \
/var/cache/* \
/var/lib/apt/lists/* \
/var/tmp/*
# Install geckodriver:
RUN export BASE_URL=https://github.com/mozilla/geckodriver/releases/download \
&& export VERSION=$(curl -sL \
https://api.github.com/repos/mozilla/geckodriver/releases/latest | \
grep tag_name | cut -d '"' -f 4) \
&& curl -sL \
$BASE_URL/$VERSION/geckodriver-$VERSION-linux64.tar.gz | tar -xz \
&& mv geckodriver /usr/local/bin/geckodriver
USER webdriver
CMD ["geckodriver", "--host", "0.0.0.0"]
from here
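Since the CMD above starts geckodriver with --host 0.0.0.0 on its default port 4444, the spider can also talk to that container as a remote WebDriver (a sketch; the geckodriver hostname and the example URL are assumptions):
from selenium import webdriver

# Connect to the geckodriver container built from the Dockerfile above
# ("geckodriver" is an assumed hostname on the Docker network)
options = webdriver.FirefoxOptions()
driver = webdriver.Remote(command_executor="http://geckodriver:4444", options=options)
driver.get("https://example.com")
print(driver.title)
driver.quit()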
Related
I have a problem using the Google Coral Mini PCIe accelerator with Docker. My Dockerfile is:
FROM python:3.8.13-slim-bullseye
LABEL Author="S"
LABEL version="1.0"
ENV PATH /usr/local/bin:$PATH
# PYTHON 3.8 and libraries
# ffmpeg for the counter: ImportError: libGL.so.1: cannot open shared object file: No such file or directory
# set timezone
RUN apt-get update && \
apt-get install -yq tzdata && \
ln -fs /usr/share/zoneinfo/Europe/Rome /etc/localtime && \
dpkg-reconfigure -f noninteractive tzdata;
RUN set -eux; \
apt-get update; \
apt-get -y install --no-install-recommends \
curl \
gnupg \
bash \
python3-dev \
apt-utils \
libglib2.0-0 \
libsm6 \
libxext6 \
libxrender-dev \
ffmpeg \
net-tools \
iputils-ping \
vim;
# CORAL PCI BOARD
# test with lspci -nn | grep 089a
RUN set -eux; \
echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | tee /etc/apt/sources.list.d/coral-edgetpu.list ;\
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - ; \
apt-get update; \
apt-get -y install --no-install-recommends \
python3-pycoral gcc \
gasket-dkms \
libedgetpu1-std \
pciutils ;
RUN set -eux; \
apt-get -y install --no-install-recommends \
libatlas-base-dev \
mosquitto-clients \
sysstat ; \
rm -rf /var/lib/apt/lists/* ;
COPY ./baseimage.requirements.txt /baseimage/
RUN pip install --no-cache-dir -r /baseimage/baseimage.requirements.txt
RUN python3 -m pip install --extra-index-url https://google-coral.github.io/py-repo/ pycoral~=2.0
CMD ["/bin/bash"]
In the baseimage.requirements.txt file:
asyncua==0.9.94
Cython==0.29.23
filterpy==1.4.5
Flask==1.1.2
Flask_RESTful==0.3.8
geojson==2.5.0
imutils==0.5.4
ipython==8.4.0
keras_unet_collection==0.1.6
msgpack_numpy==0.4.7.1
msgpack_python==0.5.6
numba==0.54.0
numpy==1.19.5
opencv_python==4.1.2.30
Pillow==9.2.0
prometheus_client==0.14.1
pygeoj==1.0.0
pyminizip==0.2.4
PyMySQL==1.0.2
pynmea2==1.18.0
pyproj==3.3.1
pypylon==1.7.4
pyrealsense2==2.42.0.2924
pyserial==3.5
redis==3.5.3
requests==2.22.0
scikit_image==0.18.1
scikit_learn==1.1.1
scipy==1.6.2
setuptools==45.2.0
Shapely==1.8.2
# skimage==0.0
tensorflow==2.5.0
tflite_runtime==2.7.0
paho_mqtt==1.6.1
psutil==5.8.0
wheel==0.36.2
The Docker container has the device mounted (-v /dev/apex_0:/dev/apex_0) and sees the Coral with the command lspci -nn | grep 089a.
I use Python 3.8.13.
In the Docker container I execute the Coral example:
git clone https://github.com/google-coral/pycoral.git
cd pycoral
bash examples/install_requirements.sh classify_image.py
python3 examples/classify_image.py \
--model test_data/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite \
--labels test_data/inat_bird_labels.txt \
--input test_data/parrot.jpg
And I get this error:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/tflite_runtime/interpreter.py", line 160, in load_delegate
delegate = Delegate(library, options)
File "/usr/local/lib/python3.8/site-packages/tflite_runtime/interpreter.py", line 119, in __init__
raise ValueError(capture.message)
ValueError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "examples/classify_image.py", line 121, in <module>
main()
File "examples/classify_image.py", line 71, in main
interpreter = make_interpreter(*args.model.split('#'))
File "/usr/local/lib/python3.8/site-packages/pycoral/utils/edgetpu.py", line 87, in make_interpreter
delegates = [load_edgetpu_delegate({'device': device} if device else {})]
File "/usr/local/lib/python3.8/site-packages/pycoral/utils/edgetpu.py", line 52, in load_edgetpu_delegate
return tflite.load_delegate(_EDGETPU_SHARED_LIB, options or {})
File "/usr/local/lib/python3.8/site-packages/tflite_runtime/interpreter.py", line 162, in load_delegate
raise ValueError('Failed to load delegate from {}\n{}'.format(
ValueError: Failed to load delegate from libedgetpu.so.1
On the host machine, the same example works.
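The failure can be reproduced without the full example by loading the delegate directly (a minimal sketch; the pci:0 device string in the comment is an assumption for the Mini PCIe board):
from tflite_runtime.interpreter import load_delegate

try:
    # This is the same call pycoral makes internally; an options dict such as
    # {'device': 'pci:0'} can be passed to target the PCIe board explicitly.
    delegate = load_delegate('libedgetpu.so.1')
    print('Edge TPU delegate loaded:', delegate)
except ValueError as err:
    # Typical causes inside a container: /dev/apex_0 not visible or not
    # readable/writable by the container user, or libedgetpu1-std missing.
    print('Failed to load delegate:', err)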
I'm currently evaluating Selenium in combination with GitLab CI as a testing tool for our website. This is my current .gitlab-ci.yml:
variables:
GIT_STRATEGY: clone
GIT_DEPTH: 0
stages:
- tests
test:
stage: tests
image: node:latest
tags:
- linux
before_script:
- apt-get update
- apt-get install -y chromium
- npm install -g selenium-side-runner
- npm install -g chromedriver
script:
- selenium-side-runner My-UI-Test.side
I'm getting the following error:
FAIL ./DefaultSuite.test.js
● Test suite failed to run
WebDriverError: unknown error: Chrome failed to start: exited abnormally.
(unknown error: DevToolsActivePort file doesn't exist)
(The process started from chrome location /usr/bin/chromium is no longer running, so ChromeDriver is assuming that Chrome has crashed.)
at Object.throwDecodedError (../../../../../../usr/local/lib/node_modules/selenium-side-runner/node_modules/selenium-webdriver/lib/error.js:550:15)
at parseHttpResponse (../../../../../../usr/local/lib/node_modules/selenium-side-runner/node_modules/selenium-webdriver/lib/http.js:560:13)
at Executor.execute (../../../../../../usr/local/lib/node_modules/selenium-side-runner/node_modules/selenium-webdriver/lib/http.js:486:26)
I've searched for the error message DevToolsActivePort file doesn't exist and it seems that Chrome doesn't like to be run with root privileges. A lot of answers suggest using the --no-sandbox or --disable-dev-shm-usage flags. But those are Chrome flags, and since I'm not calling Chrome directly, I can't use them. The website in question is also deployed from a different project, so I have no code to work with. The only files I can change are My-UI-Test.side and .side.yaml.
I have a separate project for my e2e tests, into which I've added a Dockerfile, my Selenium .side file, as well as a config file .side.conf. This project uses the GitLab Docker registry to upload the project as an image that can be loaded directly into GitLab CI.
Here are my files for the e2e test project:
package.json
...
"scripts": {
"test": "selenium-side-runner test.side"
},
...
"dependencies": {
"selenium-side-runner": "^3.17.0",
"chromedriver": "^101.0.0"
}
These are the options I am using; you might want to adjust a few things here and there. The capabilities are pretty much what you want, though.
I've also added the baseUrl key to this file instead of putting it directly into package.json, because I use the same image for several environments with changing URLs, which I replace in my before_script whenever needed. (I left this out below, as your use case probably differs.)
.side.yml
capabilities:
browserName: "chrome"
goog:chromeOptions:
binary: /usr/bin/google-chrome-stable
args:
- no-sandbox
- disable-dev-shm-usage
- headless
- nogpu
output-directory: results
output-format: junit
baseUrl: <baseURL>
The Dockerfile might include a few useless dependencies; you can likely remove a lot of them. Many of those are just copied over from my Puppeteer Dockerfile, as it uses the google-chrome-stable binary very similarly. Downloading the fonts along with the google-chrome-stable binary might not be needed in your case either. So just adjust it to your needs.
Dockerfile
FROM node:14
RUN apt update
RUN apt install -y \
rsync \
grsync \
gnupg \
ca-certificates \
fonts-liberation \
libappindicator3-1 \
libasound2 \
libatk-bridge2.0-0 \
libatk1.0-0 \
libc6 \
libcairo2 \
libcups2 \
libdbus-1-3 \
libexpat1 \
libfontconfig1 \
libgbm1 \
libgcc1 \
libglib2.0-0 \
libgtk-3-0 \
libnspr4 \
libnss3 \
libpango-1.0-0 \
libpangocairo-1.0-0 \
libstdc++6 \
libx11-6 \
libx11-xcb1 \
libxcb1 \
libxcomposite1 \
libxcursor1 \
libxdamage1 \
libxext6 \
libxfixes3 \
libxi6 \
libxrandr2 \
libxrender1 \
libxss1 \
libxtst6 \
lsb-release \
wget \
xdg-utils
RUN apt-get update \
&& apt-get install -y wget gnupg \
&& wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list' \
&& apt-get update \
&& apt-get install -y google-chrome-stable fonts-ipafont-gothic fonts-wqy-zenhei fonts-thai-tlwg fonts-kacst fonts-freefont-ttf libxss1 \
--no-install-recommends \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY package.json /app
COPY .side.yml /app
COPY test.side /app
COPY results /app
RUN npm i -g chromedriver --unsafe-perm
RUN npm i -g selenium-side-runner --unsafe-perm
RUN npm install
And here is how I include it in my GitLab CI.
gitlab-ci.yml:
e2e:
stage: test
image: <image-of-above-project>:1.0
variables:
GIT_STRATEGY: none
script:
- cat .side.yml
- npm run test
If you need more information on the container registry, see: https://docs.gitlab.com/ee/user/packages/container_registry/
I am having issues running version 92 with the browser open; in headless mode it works fine.
I am currently running my tests in a Docker container. The installation of Chrome and ChromeDriver follows:
RUN curl -s https://dl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" > /etc/apt/sources.list.d/google-chrome.list && \
apt-get update && \
apt-get install -y \
xvfb \
google-chrome-stable=92.\* \
unzip
RUN curl -s -o /tmp/chromedriver.zip "https://chromedriver.storage.googleapis.com/$(curl -s https://chromedriver.storage.googleapis.com/LATEST_RELEASE_92)/chromedriver_linux64.zip" && \
unzip /tmp/chromedriver.zip chromedriver -d /usr/bin/ && \
chmod +x /usr/bin/chromedriver
After executing my command to run the tests, the browser opens but nothing happens.
I get stuck on data; in the address bar, and the page keeps loading forever.
I'm currently using these Chrome options:
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument("--no-sandbox")
chrome_options.add_argument("--incognito")
chrome_options.add_argument("--disable-dev-shm-usage")
It looks like it was related to an old NVIDIA driver running on Linux:
You may need to run with --disable-gpu on Linux with NVIDIA driver older than 295.20
Source: chromium documentation
So, adding chrome_options.add_argument("--disable-gpu") solved my problem.
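Put together, the working options block looks roughly like this (a sketch; whether you also need --headless depends on whether the tests run under xvfb):
from selenium import webdriver

chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument("--no-sandbox")
chrome_options.add_argument("--incognito")
chrome_options.add_argument("--disable-dev-shm-usage")
chrome_options.add_argument("--disable-gpu")  # needed with NVIDIA drivers older than 295.20

driver = webdriver.Chrome(options=chrome_options)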
I am trying to build a Docker image for my Selenium tests. However, I keep getting the error message "org.openqa.selenium.WebDriverException: unknown error: Chrome failed to start: crashed".
Please do not mark this as a duplicate. Although I have referred to a lot of the answers provided in the links below, I am still not able to get through this. I have tried all the answers provided, but no luck.
Selenium: WebDriverException:Chrome failed to start: crashed as google-chrome is no longer running so ChromeDriver is assuming that Chrome has crashed
WebDriverException: unknown error: DevToolsActivePort file doesn't exist while trying to initiate Chrome Browser
Please find my Dockerfile and my Selenium code below.
The Dockerfile looks like this:
FROM selenium/standalone-chrome
FROM gradle
RUN gradle wrapper
USER root
RUN apt-get update; apt-get -y install wget gnupg2
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list
RUN apt-get update; apt-get -y install google-chrome-stable
COPY . /project
RUN chown -R gradle:gradle /project
RUN wget -N http://chromedriver.storage.googleapis.com/76.0.3809.25/chromedriver_linux64.zip -P ~/
RUN unzip ~/chromedriver_linux64.zip -d ~/
RUN rm ~/chromedriver_linux64.zip
RUN mv -f ~/chromedriver /project/executables/chromedriver
RUN chown gradle:gradle /project/executables/chromedriver
RUN chmod 0755 /project/executables/chromedriver
USER gradle
WORKDIR /project
ENV GRADLE_USER_HOME /project/.gradle_home
CMD gradle build --info
Selenium code :
ChromeOptions chromeOptions = new ChromeOptions();
chromeOptions.addArguments("--headless");
chromeOptions.addArguments("start-maximized"); // open Browser in maximized mode
chromeOptions.addArguments("disable-infobars"); // disabling infobars
chromeOptions.addArguments("--disable-extensions"); // disabling extensions
chromeOptions.addArguments("--disable-gpu"); // applicable to windows os only
chromeOptions.addArguments("--disable-dev-shm-usage"); // overcome limited resource problems
chromeOptions.addArguments("--no-sandbox"); // Bypass OS security model
System.setProperty("webdriver.chrome.driver","executables/chromedriver");
WebDriver driver = new ChromeDriver(chromeOptions);
driver.get("http://google.com");
As you can see from the error message, Chrome is starting at the default location (/usr/bin/google-chrome) but it is crashing.
Starting ChromeDriver 76.0.3809.25 (a0c95f440512e06df1c9c206f2d79cc20be18bb1-refs/branch-heads/3809#{#271}) on port 30275
Only local connections are allowed.
" org.openqa.selenium.WebDriverException: unknown error: Chrome failed to start: crashed" .
(unknown error: DevToolsActivePort file doesn't exist)
(The process started from chrome location /usr/bin/google-chrome is no longer running, so ChromeDriver is assuming that Chrome has crashed.)
System info: host: 'd2e61fa0170d', ip: '172.17.0.2', os.name: 'Linux', os.arch: 'amd64', os.version: '4.9.125-linuxkit', java.version: '1.8.0_212'
Driver info: driver.version: ChromeDriver
I am using the latest ChromeDriver, 76.0.3809.25. I am assuming that the latest Google Chrome is fetched and installed.
Any help is appreciated.
It seems like you are having issues installing Google Chrome and its driver. Sharing my Dockerfile and docker-compose.yml. I achieved this using Python; it also includes examples for Firefox and PhantomJS.
FROM ubuntu:bionic
RUN apt-get update && apt-get install -y \
python3 python3-pip \
fonts-liberation libappindicator3-1 libasound2 libatk-bridge2.0-0 \
libnspr4 libnss3 lsb-release xdg-utils libxss1 libdbus-glib-1-2 \
curl unzip wget \
xvfb
# install geckodriver and firefox
RUN GECKODRIVER_VERSION=`curl https://github.com/mozilla/geckodriver/releases/latest | grep -Po 'v[0-9]+.[0-9]+.[0-9]+'` && \
wget https://github.com/mozilla/geckodriver/releases/download/$GECKODRIVER_VERSION/geckodriver-$GECKODRIVER_VERSION-linux64.tar.gz && \
tar -zxf geckodriver-$GECKODRIVER_VERSION-linux64.tar.gz -C /usr/local/bin && \
chmod +x /usr/local/bin/geckodriver && \
rm geckodriver-$GECKODRIVER_VERSION-linux64.tar.gz
RUN FIREFOX_SETUP=firefox-setup.tar.bz2 && \
apt-get purge firefox && \
wget -O $FIREFOX_SETUP "https://download.mozilla.org/?product=firefox-latest&os=linux64" && \
tar xjf $FIREFOX_SETUP -C /opt/ && \
ln -s /opt/firefox/firefox /usr/bin/firefox && \
rm $FIREFOX_SETUP
# install chromedriver and google-chrome
RUN CHROMEDRIVER_VERSION=`curl -sS chromedriver.storage.googleapis.com/LATEST_RELEASE` && \
wget https://chromedriver.storage.googleapis.com/$CHROMEDRIVER_VERSION/chromedriver_linux64.zip && \
unzip chromedriver_linux64.zip -d /usr/bin && \
chmod +x /usr/bin/chromedriver && \
rm chromedriver_linux64.zip
RUN CHROME_SETUP=google-chrome.deb && \
wget -O $CHROME_SETUP "https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb" && \
dpkg -i $CHROME_SETUP && \
apt-get install -y -f && \
rm $CHROME_SETUP
# install phantomjs
RUN wget https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-2.1.1-linux-x86_64.tar.bz2 && \
tar -jxf phantomjs-2.1.1-linux-x86_64.tar.bz2 && \
cp phantomjs-2.1.1-linux-x86_64/bin/phantomjs /usr/local/bin/phantomjs && \
rm phantomjs-2.1.1-linux-x86_64.tar.bz2
RUN pip3 install selenium
RUN pip3 install pyvirtualdisplay
RUN pip3 install Selenium-Screenshot
RUN pip3 install requests
RUN pip3 install pytest
ENV LANG C.UTF-8
ENV LC_ALL C.UTF-8
ENV PYTHONUNBUFFERED=1
ENV APP_HOME /usr/src/app
WORKDIR /$APP_HOME
COPY . $APP_HOME/
CMD tail -f /dev/null
CMD python3 example.py
docker-compose.yml
selenium:
build: .
ports:
- 4000:4000
- 443:443
volumes:
- ./data/:/data/
privileged: true
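The example.py run by the final CMD is not shown above; a minimal sketch of what it could look like against the chromedriver installed in /usr/bin (the flags are assumptions for running Chrome as root in a container):
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--headless")               # no display inside the container
options.add_argument("--no-sandbox")             # required when Chrome runs as root
options.add_argument("--disable-dev-shm-usage")  # avoid the small /dev/shm in containers

# chromedriver was unzipped to /usr/bin in the Dockerfile, so it is found on PATH
driver = webdriver.Chrome(options=options)
driver.get("https://example.com")
print(driver.title)
driver.quit()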
TL;DR: Dockerized Ambari in an Ubuntu 14.04 Docker container throws an error on startup with the default configuration.
I'm attempting to Dockerize an Ambari deployment so it can run alongside my Hadoop containers. Here is my Dockerfile:
FROM ubuntu:14.04
ENV AMBARI_HOME /opt/ambari
ENV AMBARI_VERSION 2.2.0.0
RUN export DEBIAN_FRONTEND=noninteractive \
&& apt-get update \
&& apt-get -y install wget software-properties-common python-software-properties openssh-client openssh-server
# Install Java.
RUN \
echo oracle-java8-installer shared/accepted-oracle-license-v1-1 select true | debconf-set-selections && \
add-apt-repository -y ppa:webupd8team/java && \
apt-get update && \
apt-get install -y oracle-java8-installer && \
rm -rf /var/lib/apt/lists/* && \
rm -rf /var/cache/oracle-jdk8-installer
# Define commonly used JAVA_HOME variable
ENV JAVA_HOME /usr/lib/jvm/java-8-oracle
RUN mkdir -p "$AMBARI_HOME"
WORKDIR $AMBARI_HOME
# passwordless ssh
RUN export DEBIAN_FRONTEND=noninteractive \
&& echo -e 'y\n'|ssh-keygen -q -t rsa -N "" -f /root/.ssh/id_rsa \
&& cp /root/.ssh/id_rsa.pub /root/.ssh/authorized_keys
RUN export DEBIAN_FRONTEND=noninteractive \
&& wget -nv http://public-repo-1.hortonworks.com/ambari/ubuntu14/2.x/updates/2.2.0.0/ambari.list -O /etc/apt/sources.list.d/ambari.list \
&& apt-key adv --recv-keys --keyserver keyserver.ubuntu.com B9733A7A07513CAD \
&& apt-get update \
&& apt-get -y install ambari-server
#Disable SELinux
RUN echo SELINUX=disabled >> /etc/selinux/config
EXPOSE 8080
RUN ambari-server setup -s --verbose --java-home $JAVA_HOME
CMD ambari-server start
When I start the container I get the following error:
Using python /usr/bin/python2
Starting ambari-server
Ambari Server running with administrator privileges.
About to start PostgreSQL
Organizing resource files at /var/lib/ambari-server/resources...
WARNING: setpgid(73, 0) failed - [Errno 13] Permission denied
Server PID at: /var/run/ambari-server/ambari-server.pid
Server out at: /var/log/ambari-server/ambari-server.out
Server log at: /var/log/ambari-server/ambari-server.log
Waiting for server start.........
ERROR: Exiting with exit code -1.
REASON: Ambari Server java process died with exitcode -1. Check /var/log/ambari-server/ambari-server.out for more information.
There doesn't seem to be anything useful in ambari-server.log or .out.
I found an issue for WARNING: setpgid(73, 0) failed - [Errno 13] Permission denied, fixed here: setpgid issue
From reading the Hortonworks docs for deploying to Ubuntu 14.04, this should work:
Install Ambari on Ubuntu 14.04
I've tried to deploy with the embedded Postgres as well as an external one, with the same results.
One interesting note is that even with the error, Ambari appears to be up and I can log in as the default admin/admin, but when calling ambari-server stop it says no process is running...
root@3e6d778b43f8:/opt/ambari# ambari-server stop
Using python /usr/bin/python2
Stopping ambari-server
Ambari Server is not running
root@3e6d778b43f8:/opt/ambari# jps
868 AmbariServer
955 Jps
I'll replicate this setup on my Ubuntu box tomorrow and see if the same thing happens.
Thanks!
Edit #1: docker info
vagrant@vagrant-ubuntu-trusty-64:/vagrant/scripts$ docker info
Containers: 14
Images: 161
Server Version: 1.9.1
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 189
Dirperm1 Supported: false
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.13.0-44-generic
Operating System: Ubuntu 14.04.1 LTS
CPUs: 1
Total Memory: 3.861 GiB
Name: vagrant-ubuntu-trusty-64
ID: 7AD6:Z5TH:76NW:G54B:IHVK:PWKP:E2LI:CRPI:MIGM:STJU:3D2B:K7EQ
WARNING: No swap limit support
vagrant@vagrant-ubuntu-trusty-64:/vagrant/scripts$ docker version
Client:
Version: 1.9.1
API version: 1.21
Go version: go1.4.2
Git commit: a34a1d5
Built: Fri Nov 20 13:12:04 UTC 2015
OS/Arch: linux/amd64
Server:
Version: 1.9.1
API version: 1.21
Go version: go1.4.2
Git commit: a34a1d5
Built: Fri Nov 20 13:12:04 UTC 2015
OS/Arch: linux/amd64
Docker is running inside a Vagrant VirtualBox instance (v1.8.1).
I had the same problem with ambari-server inside Docker on Ubuntu 14.04. Could you try the following?
Work around the aufs problem
Inside /etc/default/docker add
DOCKER_OPTS="--storage-driver=devicemapper"
and restart the Docker service. Note that after this all your images will disappear (http://muehe.org/posts/switching-docker-from-aufs-to-devicemapper/). Rebuild your images.
To be honest I'm not 100% sure if this part is really needed.
After switching from aufs to devicemapper you might get the following error:
ERROR: Could not find container for entity id
The solution was to remove the old AUFS db and any existing containers:
sudo rm -rf /var/lib/docker/containers/*
sudo rm -rf /var/lib/docker/linkgraph.db
Restarting your Docker images/containers should now work on the devicemapper engine.
Put AppArmor into complain mode for Docker
Inside /etc/apparmor.d/docker, comment out (#) the line deny @{PROC}/{*,**^[0-9*],sys/kernel/shm*} wkx,; it somehow confuses the AppArmor utils. Then run
sudo aa-complain /etc/apparmor.d/docker
If aa-complain throws command not found, install:
sudo apt-get install apparmor-utils
After starting the container, ambari-server started working for me.
I don't know how Docker relies on AppArmor here, i.e. what risks the operation above introduces...
It looks like there's an issue deploying Ambari to a Docker container. I broke it out and installed it onto a Vagrant Ubuntu 14.04 VM with the following scripts:
install_java.sh
#!/bin/bash
echo oracle-java8-installer shared/accepted-oracle-license-v1-1 select true | debconf-set-selections && \
add-apt-repository -y ppa:webupd8team/java && \
apt-get update && \
apt-get install -y oracle-java8-installer && \
rm -rf /var/lib/apt/lists/* && \
rm -rf /var/cache/oracle-jdk8-installer
install_ambari.sh
#!/bin/bash
export DEBIAN_FRONTEND=noninteractive \
&& wget -nv http://public-repo-1.hortonworks.com/ambari/ubuntu14/2.x/updates/2.2.0.0/ambari.list -O /etc/apt/sources.list.d/ambari.list \
&& apt-key adv --recv-keys --keyserver keyserver.ubuntu.com B9733A7A07513CAD \
&& apt-get update \
&& apt-get -y install ambari-server
Followed by:
sudo ambari-server setup -s -v -j $JAVA_HOME
sudo ambari-server start -v
@thaJeztah - what do I need to fix in my Dockerfile setup?