I used Ubuntu 18.04 with GCC 7.04, Python 2.7.15, protobuf 3, and SWIG 3.0.12. When I run an ARM full-system simulation, I have a problem.
My setup for full system is:
1. sudo su
2. mkdir fullsystem
3. cd fullsystem
4. wget http://www.gem5.org/dist/current/arm/arm-system-2011-08.tar.bz2
5. echo "export M5_PATH=/home/farideh/gem5/fullsystem/" >> ~/.bashrc
6. gedit ~/.bashrc
7. echo $M5_PATH
8. source ~/.bashrc
In configs/common/SysPaths.py:
path = [ '/dist/m5/system', '/home/farideh/gem5/fullsystem' ]
In configs/common/Benchmarks.py:
elif buildEnv['TARGET_ISA'] == 'arm':
    return env.get('LINUX_IMAGE', disk('linaro-minimal-aarch64.img'))
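For reference, gem5's disk() helper resolves image names against a disks/ subdirectory of each M5_PATH entry (and kernels against binaries/), so a quick sanity check of the extracted tarball layout might look like this (a sketch; the exact file names depend on the tarball contents):
echo "M5_PATH=$M5_PATH"
ls "$M5_PATH/binaries"                 # kernel/bootloader binaries should be here
ls "$M5_PATH/disks" | grep -i linaro   # the image named in Benchmarks.py must be here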
Then, when I run the full-system simulation, it fails with:
warn: Kernel panic in simulated kernel
I don't know what to do with this.
I found a similar case here. I am using Molecule to test my Ansible roles, but for some reason it is skipping the "create" part and gives an error like:
fatal: [rabbitmq]: UNREACHABLE! => {"changed": false, "msg": "Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \"` echo ~/.ansible/tmp `\"&& mkdir \"` echo ~/.ansible/tmp/ansible-tmp-1638541586.6239848-828-250053975102429 `\" && echo ansible-tmp-1638541586.6239848-828-250053975102429=\"` echo ~/.ansible/tmp/ansible-tmp-1638541586.6239848-828-250053975102429 `\" ), exited with result 1", "unreachable": true}
It skips the create step with the message "Skipping, instances already created." However, nothing is running:
name@EEW00438:~/.cache$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
name@EEW00438:~/.cache$
What I tried:
molecule destroy
molecule reset
restart
rm -rf ~/.cache/
changed remote_tmp to /tmp/.ansible/ in /etc/ansible/ansible.cfg
reinstall molecule
This issue is only with one role.
UPDATE:
It is failing on this step:
mkdir \"` echo ~/.ansible/tmp/ansible-tmp-1638782939.31706-2913-12516475286623 `\" && echo ansible-tmp-1638782939.31706-2913-12516475286623=
mkdir: cannot create directory ‘"/home/user/.ansible/tmp/ansible-tmp-1638782939.31706-2913-12516475286623"’: No such file or directory
I stumbled upon this issue as well.
When you create the role, you need to create it with molecule init role --driver-name docker ns.myrole to enable Docker. Be sure to install the Docker driver too if you haven't: pip install --upgrade molecule-docker
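For reference, those two commands together (ns.myrole is just a placeholder namespace/role name):
pip install --upgrade molecule-docker                 # the Docker driver for Molecule
molecule init role --driver-name docker ns.myrole     # scaffold the role with the docker driver enabled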
So if you need to tweak the container that runs, edit molecule.yml. It defaults to CentOS; I switched to Ubuntu there, and created a Dockerfile to provision the container with the things that need to exist.
molecule.yml
---
dependency:
  name: galaxy
driver:
  name: docker
platforms:
  - name: instance
    image: ubuntu:22.04  # this is required but ignored since I specify a `dockerfile`
    pre_build_image: false
    dockerfile: Dockerfile
provisioner:
  name: ansible
verifier:
  name: ansible
For example, Ubuntu 22.04 no longer ships a python command, so I added an alias at the end of what Molecule renders so that Ansible can call python and have it redirect to python3.
FROM ubuntu:22.04
RUN if [ $(command -v apt-get) ]; then export DEBIAN_FRONTEND=noninteractive && apt-get update && apt-get install -y python3 sudo bash ca-certificates iproute2 python3-apt aptitude && apt-get clean && rm -rf /var/lib/apt/lists/*; \
elif [ $(command -v dnf) ]; then dnf makecache && dnf --assumeyes install /usr/bin/python3 /usr/bin/python3-config /usr/bin/dnf-3 sudo bash iproute && dnf clean all; \
elif [ $(command -v yum) ]; then yum makecache fast && yum install -y /usr/bin/python /usr/bin/python2-config sudo yum-plugin-ovl bash iproute && sed -i 's/plugins=0/plugins=1/g' /etc/yum.conf && yum clean all; \
elif [ $(command -v zypper) ]; then zypper refresh && zypper install -y python3 sudo bash iproute2 && zypper clean -a; \
elif [ $(command -v apk) ]; then apk update && apk add --no-cache python3 sudo bash ca-certificates; \
elif [ $(command -v xbps-install) ]; then xbps-install -Syu && xbps-install -y python3 sudo bash ca-certificates iproute2 && xbps-remove -O; fi
RUN echo 'alias python=python3' >> ~/.bashrc
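With molecule.yml and the Dockerfile above in place, a typical local cycle might look like this (a sketch; assumes the default scenario):
molecule create     # builds the image from the Dockerfile and starts the instance
molecule converge   # runs the role against the instance
molecule verify     # runs the verifier
molecule destroy    # tears the instance down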
It's been years since I last used Molecule, and I must say... it's gone downhill. It used to be easy/clear/direct to get things working. Sigh. I guess I should stick to containers and force the migration off VMs sooner!
The problem may be caused by a Docker context change performed at the start of Docker Desktop. Despite this, Molecule does create a container, but in an inactive context.
At startup, Docker Desktop automatically switches the context from default to desktop-linux [1]. The active context determines which containers are available from CLI.
The context cannot be set in Molecule, i.e. the default context is always used to create containers [2].
$ molecule create --scenario-name test
... # The output with the error is skipped because it duplicates the output from the question
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$ docker context ls
NAME TYPE DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR
default moby Current DOCKER_HOST based configuration unix:///var/run/docker.sock swarm
desktop-linux * moby unix:///home/bkarpov/.docker/desktop/docker.sock
$ docker context use default
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a71bfd28992f geerlingguy/docker-ubuntu2004-ansible "bash -c 'while true…" 5 minutes ago Up 5 minutes some-instance
$ molecule login --scenario-name test
INFO Running test > login
root@some-instance:/#
Solutions
Switch the context back to default manually
docker context use default
This solution is suitable for one-time execution, since the context will need to be switched every time Docker Desktop is started. Docker Desktop service will continue to work using the desktop-linux context.
Issue with the request to add context switching to Docker Desktop - https://github.com/docker/roadmap/issues/47
Stop Docker Desktop
systemctl --user stop docker-desktop
Stopping the Docker Desktop service will automatically switch to the default context.
Set DOCKER_CONTEXT so that Docker Desktop does not change the context in the current shell
export DOCKER_CONTEXT=default
systemctl --user restart docker-desktop
When stopping, the context returns to default, and when starting, it does not switch to desktop-linux.
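If you want that to stick for every new shell, the variable can go in your shell profile (a sketch, assuming bash):
echo 'export DOCKER_CONTEXT=default' >> ~/.bashrc
source ~/.bashrc
docker context ls   # the CLI now resolves to the default context regardless of what Docker Desktop activates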
References
https://docs.docker.com/desktop/install/ubuntu/#launch-docker-desktop
https://github.com/ansible-community/molecule-docker#faq
I installed tf_trt_models on a Jetson Nano following the instructions here. I am getting the following error:
Installed /home/tarik-dev/.local/lib/python3.6/site-packages/slim-0.1-py3.6.egg
Processing dependencies for slim==0.1
Finished processing dependencies for slim==0.1
~/tf_trt_models
Installing tf_trt_models
/home/tarik-dev/tf_trt_models
running install
Checking .pth file support in /home/tarik-dev/.local/lib/python3.6/site-packages/
/home/tarik-dev/.virtualenvs/nanocv/bin/python -E -c pass
TEST FAILED: /home/tarik-dev/.local/lib/python3.6/site-packages/ does NOT support .pth files
bad install directory or PYTHONPATH
Found the solution: because I am in a virtualenv, I need to remove --user from the install script (the patch is sketched after the script below).
Here is the install.sh script:
#!/bin/bash
INSTALL_PROTOC=$PWD/scripts/install_protoc.sh
MODELS_DIR=$PWD/third_party/models
PYTHON=python
if [ $# -eq 1 ]; then
PYTHON=$1
fi
echo $PYTHON
# install protoc
echo "Downloading protoc"
source $INSTALL_PROTOC
PROTOC=$PWD/data/protoc/bin/protoc
# install tensorflow models
git submodule update --init
pushd $MODELS_DIR/research
echo $PWD
echo "Installing object detection library"
echo $PROTOC
$PROTOC object_detection/protos/*.proto --python_out=.
$PYTHON setup.py install --user
popd
pushd $MODELS_DIR/research/slim
echo $PWD
echo "Installing slim library"
$PYTHON setup.py install --user
popd
echo "Installing tf_trt_models"
echo $PWD
$PYTHON setup.py install --user
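The fix amounts to dropping --user from the three setup.py invocations; a one-line patch might look like this (a sketch):
# inside a virtualenv, user-site installs are not supported, so strip the flag:
sed -i 's/ --user//g' install.sh
# turning e.g. `$PYTHON setup.py install --user` into `$PYTHON setup.py install`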
Here are the steps in my CircleCI config:
steps:
  - run:
      name: Setup nvm and npm
      command: |
        wget -qO- https://raw.githubusercontent.com/creationix/nvm/v0.33.8/install.sh | bash
        export NVM_DIR=$HOME/.nvm
        source $NVM_DIR/nvm.sh
        nvm install 8.9 && nvm alias default 8.9
  - run: npm install && npm run lint && npm test
The second step always fails with this error message
/bin/bash: npm: command not found
I checked .bashrc and I can see the following lines are added to the end of the file
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion
CircleCI 2.0 invokes each step command by starting a new shell with #!/bin/bash -eo pipefail.
If I start a Docker container (docker run -i -t buildpack-deps:xenial), apply the first step, and then start a new shell via #!/bin/bash -eo pipefail, I can see that npm is available on the path.
I am using Docker for this project:
version: 2
jobs:
  test_main:
    docker:
      - image: buildpack-deps:xenial
So why does it fail in the CircleCI 2.0 environment? How can I ensure that npm installed in step 1 is available in step 2?
I have tried to add [ -s "$HOME/.bashrc" ] && \. "$HOME/.bashrc" to ~/.bash_profile (in case .bashrc is not executed due to the non-interactive/non-login shell)
To reproduce the issue, you can run circleci build with this .circleci/config.yml file:
version: 2
jobs:
  build:
    docker:
      - image: buildpack-deps:xenial
    steps:
      - run:
          name: Setup nvm and npm
          command: |
            wget -qO- https://raw.githubusercontent.com/creationix/nvm/v0.33.8/install.sh | bash
            # Activate nvm
            export NVM_DIR=$HOME/.nvm
            touch $HOME/.nvmrc
            source $NVM_DIR/nvm.sh
            # Use node 8.9
            nvm install 8.9 && nvm alias default 8.9
            echo 8.9 > $HOME/.nvmrc
            # Enable nvm in following steps
            echo '[ -s "$HOME/.bashrc" ] && \. "$HOME/.bashrc"' >> $HOME/.bash_profile
            # To fix npm install : "node-pre-gyp: Permission denied"
            npm config set user 0
            npm config set unsafe-perm true
            npm install -g npx webpack webpack-cli jest
            node --version
            npm --version
      - run: npm install
You will see the following error message:
====>> npm install
#!/bin/bash -eo pipefail
npm install
/bin/bash: npm: command not found
Error: Exited with code 127
Step failed
Task failed
The problem lies with these lines:
# Enable nvm in following steps
echo '[ -s "$HOME/.bashrc" ] && \. "$HOME/.bashrc"' >> $HOME/.bash_profile
I was hoping to source .bashrc from .bash_profile. However, since the CircleCI shell is non-interactive, the environment variable PS1 is blank, so .bashrc effectively quits immediately when it is sourced, because of this line in .bashrc:
# If not running interactively, don't do anything
[ -z "$PS1" ] && return
I had to put the following lines directly into the file specified by $BASH_ENV:
echo 'export NVM_DIR=$HOME/.nvm' >> $BASH_ENV
echo 'source $NVM_DIR/nvm.sh' >> $BASH_ENV
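Putting it together, the end of the setup step writes the activation into $BASH_ENV, which CircleCI's bash shells source at the start of each subsequent step (a sketch based on the two lines above):
# at the end of the "Setup nvm and npm" step's command:
echo 'export NVM_DIR=$HOME/.nvm' >> $BASH_ENV
echo 'source $NVM_DIR/nvm.sh' >> $BASH_ENV
# the following `- run: npm install` step then finds node and npm on PATH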
I found that changing the default node version via nvm did not work for my steps.
Solved by:
- run:
    name: 'Install Project Node'
    command: |
      set +x
      source ~/.bashrc
      nvm install 12
      NODE_DIR=$(dirname $(which node))
      echo "export PATH=$NODE_DIR:\$PATH" >> $BASH_ENV
Just source /opt/circleci/.nvm/nvm.sh in the beginning of every step.
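For example, at the top of each step's command (a sketch, assuming an image that ships nvm at that path):
source /opt/circleci/.nvm/nvm.sh
nvm use default   # or pin a version with `nvm use <version>`
npm --version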
I was trying to build Apache Impala from source (the newest version on GitHub).
I followed these instructions to build Impala:
(1) clone Impala
> git clone https://git-wip-us.apache.org/repos/asf/incubator-impala.git
> cd Impala
(2) configure environment variables
> export JAVA_HOME=/usr/lib/jvm/java-7-oracle-amd64
> export IMPALA_HOME=<path to Impala>
> export BOOST_LIBRARYDIR=/usr/lib/x86_64-linux-gnu
> export LC_ALL="en_US.UTF-8"
(3) build
${IMPALA_HOME}/buildall.sh -noclean -skiptests -build_shared_libs -format
(4) errors are shown below:
Help is needed to find the cause. It looks like the compiler does not support GLIBCXX_3.4.21, yet GCC is automatically downloaded by the build script.
I appreciate your help!
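A common way to check which symbol versions your active libstdc++ actually exports is with strings (a hedged diagnostic; the library path may differ on your system):
strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep GLIBCXX
# if GLIBCXX_3.4.21 is absent, the build is picking up the system libstdc++
# instead of the one belonging to the toolchain the build script downloaded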
Starting from this commit https://github.com/apache/impala/commit/d5cefe07c931a0d3bf02bca97bbba05400d91a48, Impala has shipped with a development bootstrap script.
I tried the master branch in a fresh Ubuntu 16.04 Docker image and it works fine. Here is what I did.
Check out the latest Impala code base and run
docker run --rm -it --privileged -v /home/amos/git/impala/:/root/Impala ubuntu:16.04
inside docker, do
apt-get update
apt-get install sudo
cd /root/Impala
comment this out in bin/bootstrap_system.sh if you don't need test data
# if ! [[ -d ~/Impala-lzo ]]
# then
# git clone https://github.com/cloudera/impala-lzo.git ~/Impala-lzo
# fi
# if ! [[ -d ~/hadoop-lzo ]]
# then
# git clone https://github.com/cloudera/hadoop-lzo.git ~/hadoop-lzo
# fi
# cd ~/hadoop-lzo/
# time -p ant package
also add this line before ssh localhost whoami
echo "source ${IMPALA_HOME}/bin/impala-config-local.sh" >> ~/.bashrc
change the build command to whatever you like in bin/bootstrap_development.sh
${IMPALA_HOME}/buildall.sh -noclean -skiptests -build_shared_libs -format
then run bin/bootstrap_development.sh
You'll be prompted for some input. Just fill in the default values and it'll work.
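Condensed, the container setup described above looks like this (a sketch; the host path is from my environment):
# on the host: mount the Impala checkout into a fresh Ubuntu 16.04 container
docker run --rm -it --privileged -v /home/amos/git/impala/:/root/Impala ubuntu:16.04
# inside the container:
apt-get update
apt-get install sudo
cd /root/Impala
bin/bootstrap_development.sh   # after applying the bootstrap_system.sh tweaks above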
I'm currently compiling the latest version of Mono from GitHub on an original-model Raspberry Pi, running the latest Raspbian.
This is a very time-consuming process which, once complete, I would not like to have to repeat.
Can the compiled Mono installation be packaged into a .deb to, for example, allow me to re-install latest Raspbian, then dpkg -i my-mono-build.deb?
Sure, and it's very easy to do if you choose the proper tool, so that you don't need to be a master of Debian packaging. As for me, I chose fpm to do exactly this. (Note: install it via gem, not apt-get.)
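Installing fpm through RubyGems might look like this (a sketch; the Ruby package names vary slightly by distro):
sudo apt-get install ruby ruby-dev build-essential   # toolchain needed to build native gem extensions
sudo gem install fpm
fpm --version   # confirm it is on PATH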
And here is an example script for building a Mono .deb with it, which I copy-paste here for posterity (just in case I delete the GitHub repo by mistake, or GitHub stops being a thing in the future):
#!/bin/bash
set -e
die () {
    echo >&2 "$@"
    exit 1
}
[ "$#" -eq 1 ] || die "Please specify the version of Mono you want to build as the argument. (Check the versions in the tarball list here: http://download.mono-project.com/sources/mono/)"
which fpm > /dev/null || (echo "Please install fpm (from gem, not apt-get)" && exit 1)
if mono --version > /dev/null 2>&1; then
echo "Mono is installed locally; please uninstall first" && exit 1
fi
WORK_DIR=/tmp/7digital-mono-work
rm -rf $WORK_DIR
mkdir $WORK_DIR
cd $WORK_DIR
MONO_VERSION=$1
MONO_DIR="mono-$MONO_VERSION"
SEVEND_VERSION="701"
MONO7D_VERSION=$MONO_VERSION'.'$SEVEND_VERSION
MONO7D_NAME="mono-7d"
echo "Downloading $MONO_VERSION"
wget http://download.mono-project.com/sources/mono/mono-$MONO_VERSION.tar.bz2
tar -jxf mono-$MONO_VERSION.tar.bz2
TARGET_DIR="$WORK_DIR/destdir"
mkdir $TARGET_DIR
cd "$WORK_DIR/$MONO_DIR"
./configure --prefix=/usr
make
make install DESTDIR="$TARGET_DIR"
cd $WORK_DIR
fpm -s dir \
-t deb \
-n $MONO7D_NAME \
-v $MONO7D_VERSION \
-C $TARGET_DIR \
-d "libglib2.0-dev (>= 0)" \
usr/bin usr/lib usr/share usr/include usr/etc
echo "Done. Your package should be ready in $WORK_DIR"