how to use npm in ansible - npm

This is my main.yml file in tasks:
- name: Use npm
  shell: >
    /bin/bash -c "source $HOME/.nvm/nvm.sh && nvm use 16.16.0"
  become: yes
  become_user: root

- name: Run build-dev
  shell: |
    cd /home/ec2-user/ofiii
    npm install
    npm run build-dev
  become: yes
  become_user: root
  when: platform == "dev"
And the output when running the script:
fatal: [172.31.200.13]: FAILED! => {
    "changed": true,
    "cmd": "cd /home/ec2-user/ofiii\nnpm install\nnpm run build-stag\n",
    "delta": "0:00:00.061363",
    "end": "2022-11-09 09:45:17.917829",
    "msg": "non-zero return code",
    "rc": 127,
    "start": "2022-11-09 09:45:17.856466",
    "stderr": "/bin/sh: line 1: npm: command not found\n/bin/sh: line 2: npm: command not found",
    "stderr_lines": ["/bin/sh: line 1: npm: command not found", "/bin/sh: line 2: npm: command not found"],
    "stdout": "",
    "stdout_lines": []
}
The error is "npm: command not found", but I am really sure npm is installed and the path is set appropriately on the machine; what I doubt is the script.
I don't know how to modify my script. I tried to use the npm module, but I failed.

The problem is that each task runs in a separate environment, and you are setting up the nvm environment in a separate task.
The "Run build-dev" task knows nothing about the paths set up by "Use npm".
I'd suggest combining these two tasks, with a few additional changes explained below:
- name: Run build-dev
  shell: |
    source $HOME/.nvm/nvm.sh
    nvm use 16.16.0
    npm install
    npm run build-dev
  args:
    executable: /bin/bash
    chdir: /home/ec2-user/ofiii
  become: yes
  become_user: root
  when: platform == "dev"
Additional changes:
Using bash -c "..." in the shell module would result in /bin/sh -c "/bin/bash -c '...'"; it's better to set executable: /bin/bash instead.
The shell module has a chdir argument to specify the directory the script runs in.
Check the shell module documentation for other arguments and examples.
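Another option, if you'd rather not source nvm in every task, is to put the node bin directory on PATH via the task's environment keyword. This is a minimal sketch, assuming nvm installed node 16.16.0 under root's default location (/root/.nvm/versions/node/v16.16.0) and that facts are gathered so ansible_env.PATH is defined:

- name: Run build-dev
  shell: |
    npm install
    npm run build-dev
  args:
    chdir: /home/ec2-user/ofiii
  environment:
    # adjust this path to wherever nvm installed node on your machine
    PATH: "/root/.nvm/versions/node/v16.16.0/bin:{{ ansible_env.PATH }}"
  become: yes
  become_user: root
  when: platform == "dev"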

Related

Why Molecule is not able to start a docker container (Failed to create temporary directory)

I found a similar case here. I am using Molecule to test my Ansible roles, but for some reason it skips the "create" part and gives an error like:
fatal: [rabbitmq]: UNREACHABLE! => {"changed": false, "msg": "Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \"` echo ~/.ansible/tmp `\"&& mkdir \"` echo ~/.ansible/tmp/ansible-tmp-1638541586.6239848-828-250053975102429 `\" && echo ansible-tmp-1638541586.6239848-828-250053975102429=\"` echo ~/.ansible/tmp/ansible-tmp-1638541586.6239848-828-250053975102429 `\" ), exited with result 1", "unreachable": true}
It skips the create process with "Skipping, instances already created." However, nothing is running:
name@EEW00438:~/.cache$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
name@EEW00438:~/.cache$
What I tried:
molecule destroy
molecule reset
restarting
rm -rf ~/.cache/
changing remote_tmp to /tmp/.ansible/ in /etc/ansible/ansible.cfg
reinstalling molecule
This issue occurs with only one role.
UPDATE:
It is failing on this step:
mkdir \"` echo ~/.ansible/tmp/ansible-tmp-1638782939.31706-2913-12516475286623 `\" && echo ansible-tmp-1638782939.31706-2913-12516475286623=
mkdir: cannot create directory ‘"/home/user/.ansible/tmp/ansible-tmp-1638782939.31706-2913-12516475286623"’: No such file or directory
I stumbled upon this issue as well.
When you create the role, you need to create it with molecule init role --driver-name docker ns.myrole to enable Docker. Be sure to install the Docker driver too if you haven't: pip install --upgrade molecule-docker
If you need to tweak the container that runs, edit molecule.yml. It defaults to CentOS. I switched to Ubuntu in there, and created a Dockerfile to provision the container with things that need to exist.
molecule.yml
---
dependency:
  name: galaxy
driver:
  name: docker
platforms:
  - name: instance
    image: ubuntu:22.04 # this is required but ignored since I specify a `dockerfile`
    pre_build_image: false
    dockerfile: Dockerfile
provisioner:
  name: ansible
verifier:
  name: ansible
For example, Ubuntu 22.04 no longer ships a python binary (only python3), so I added an alias at the end of what Molecule renders so that Ansible can call python and have it redirect to python3:
FROM ubuntu:22.04
RUN if [ $(command -v apt-get) ]; then export DEBIAN_FRONTEND=noninteractive && apt-get update && apt-get install -y python3 sudo bash ca-certificates iproute2 python3-apt aptitude && apt-get clean && rm -rf /var/lib/apt/lists/*; \
elif [ $(command -v dnf) ]; then dnf makecache && dnf --assumeyes install /usr/bin/python3 /usr/bin/python3-config /usr/bin/dnf-3 sudo bash iproute && dnf clean all; \
elif [ $(command -v yum) ]; then yum makecache fast && yum install -y /usr/bin/python /usr/bin/python2-config sudo yum-plugin-ovl bash iproute && sed -i 's/plugins=0/plugins=1/g' /etc/yum.conf && yum clean all; \
elif [ $(command -v zypper) ]; then zypper refresh && zypper install -y python3 sudo bash iproute2 && zypper clean -a; \
elif [ $(command -v apk) ]; then apk update && apk add --no-cache python3 sudo bash ca-certificates; \
elif [ $(command -v xbps-install) ]; then xbps-install -Syu && xbps-install -y python3 sudo bash ca-certificates iproute2 && xbps-remove -O; fi
RUN echo 'alias python=python3' >> ~/.bashrc
It's been years since I last used Molecule, and I must say... it's gone downhill. It used to be easy/clear/direct to get things working. Sigh. I guess I should stick to containers and force the migration off VMs sooner!
The problem may be caused by a Docker context change performed at the start of Docker Desktop. Despite this, Molecule does create a container, but in an inactive context.
At startup, Docker Desktop automatically switches the context from default to desktop-linux [1]. The active context determines which containers are available from the CLI.
The context cannot be set in Molecule, i.e. the default context is always used to create containers [2].
$ molecule create --scenario-name test
... # The output with the error is skipped because it duplicates the output from the question
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$ docker context ls
NAME TYPE DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR
default moby Current DOCKER_HOST based configuration unix:///var/run/docker.sock swarm
desktop-linux * moby unix:///home/bkarpov/.docker/desktop/docker.sock
$ docker context use default
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a71bfd28992f geerlingguy/docker-ubuntu2004-ansible "bash -c 'while true…" 5 minutes ago Up 5 minutes some-instance
$ molecule login --scenario-name test
INFO Running test > login
root@some-instance:/#
Solutions
1. Switch the context back to default manually:
docker context use default
This solution is suitable as a one-time fix, since the context will need to be switched back every time Docker Desktop is started. The Docker Desktop service will continue to work using the desktop-linux context.
Issue requesting context switching be added to Docker Desktop: https://github.com/docker/roadmap/issues/47
2. Stop Docker Desktop:
systemctl --user stop docker-desktop
Stopping the Docker Desktop service automatically switches back to the default context.
3. Set DOCKER_CONTEXT so that Docker Desktop does not change the context in the current shell:
export DOCKER_CONTEXT=default
systemctl --user restart docker-desktop
When stopping, the context returns to default, and when starting, it does not switch to desktop-linux.
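Alternatively, since the Docker CLI honours the DOCKER_CONTEXT environment variable, you can pin the context per command instead of switching it globally (a sketch, using the container name from the output above):
$ DOCKER_CONTEXT=default docker ps -a
$ DOCKER_CONTEXT=default docker logs some-instance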
References
https://docs.docker.com/desktop/install/ubuntu/#launch-docker-desktop
https://github.com/ansible-community/molecule-docker#faq

Ansible to Open Terminal and Run Program on Remote Ubuntu Machine

I'm using Ansible to set up an instance of Ubuntu 18.04 (remote) and run certain programs within the user environment. I have a command I'd like to execute inside a terminal on the remote that requires the terminal to stay open.
If I'm on Ubuntu and run the following command I get exactly what I expect.
# DISPLAY=:0 nohup gnome-terminal -- roscore
DISPLAY=:0: use the current display for the user
nohup: so the terminal won't close if the parent terminal closes
gnome-terminal: start a new gnome-terminal instance
--: run a command inside the new gnome-terminal instance
roscore: can be replaced by any command that requires an open stream to a terminal window
My Ansible task looks like this when trying to recreate the same command
- name: Start terminal on remote machine
  shell:
  args:
    cmd: DISPLAY=:0 nohup gnome-terminal -- roscore
    executable: /bin/bash
When running this command I get the following verbose output
changed: [] => {
    "changed": true,
    "cmd": "DISPLAY=:0 nohup gnome-terminal -- roscore",
    "delta": "0:00:00.243119",
    "end": "",
    "invocation": {
        "module_args": {
            "_raw_params": "DISPLAY=:0 nohup gnome-terminal -- roscore",
            "_uses_shell": true,
            "argv": null,
            "chdir": null,
            "creates": null,
            "executable": "/bin/bash",
            "removes": null,
            "stdin": null,
            "stdin_add_newline": true,
            "strip_empty_ends": true,
            "warn": true
        }
    },
    "rc": 0,
    "start": "",
    "stderr": "nohup: ignoring input",
    "stderr_lines": [
        "nohup: ignoring input"
    ],
    "stdout": "",
    "stdout_lines": []
}
When I execute this it appears that the terminal is opened for just a moment on the remote machine but it does not stay open. What is Ansible doing that would close the remote terminal session after running the command?
What I want is an Ansible task that will allow a terminal window to open on a remote Ubuntu 18.04 machine. Stretch goal would be to get the command running in the now open terminal.
Any help would be appreciated and glad to clarify where needed. Thank you!
I've decided to go a different direction, but wanted to post what I learned.
Executing a command from Ansible that opens a terminal window on the Ubuntu 18.04 (remote) machine requires the following task:
- name: Start terminal on remote machine
  shell:
  args:
    cmd: DISPLAY=:0 nohup gnome-terminal </dev/null >/dev/null 2>&1 &
    executable: /bin/bash
Notice the </dev/null >/dev/null 2>&1 &. This is necessary for Ansible to be able to disown the process while allowing the terminal to remain open on the remote machine.
In theory (I haven't proven this), running a command inside the terminal would require the extra gnome-terminal argument -e:
-e, --command=STRING
Execute the argument to this option inside the terminal.
Example
- name: Start terminal on remote machine
  shell:
  args:
    cmd: DISPLAY=:0 nohup gnome-terminal -e "bash -c 'whoami'" </dev/null >/dev/null 2>&1 &
    executable: /bin/bash
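Another approach worth trying (a sketch I have not verified on 18.04) is to let Ansible detach the process itself with async/poll instead of relying only on shell redirection:

- name: Start terminal on remote machine (fire and forget)
  shell: DISPLAY=:0 nohup gnome-terminal -- roscore
  args:
    executable: /bin/bash
  async: 86400   # maximum lifetime of the background job, in seconds
  poll: 0        # don't wait for it; Ansible moves on immediately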

ansible installing npm using nvm but returning npm command not found on npm install

I am trying to install npm with nvm using an Ansible playbook on Ubuntu 18.04.2 LTS. It gets installed, but running the npm install command returns an error: ["/bin/bash: npm: command not found"]
This is the script:
- name: Create destination dir if it does not exist
  file:
    mode: 0775
    path: "/usr/local/nvm"
    state: directory
  when: "nvm_dir != ''"

- name: Install NVM
  shell: 'curl https://raw.githubusercontent.com/creationix/nvm/v0.33.11/install.sh | NVM_SOURCE="" NVM_DIR=/usr/local/nvm PROFILE=/root/.bashrc bash'
  args:
    warn: false
  register: nvm_result
This is the repository where I got the code: https://github.com/morgangraphics/ansible-role-nvm
By default the shell module uses /bin/sh unless an executable has been explicitly defined via the module's args keyword.
It seems like /bin/bash (a variant of sh) is not installed or not found on the host, thereby giving the error; the script needs /bin/bash.
/bin/bash is installed on most operating systems, so it may be a path issue.
I also updated the code below with a condition:
---
- hosts: localhost
  tasks:
    - name: Create destination dir if it does not exist
      file:
        mode: 0775
        path: "/usr/local/nvm"
        state: directory
      when: "nvm_dir is not defined"

    - name: Install NVM
      shell: 'curl https://raw.githubusercontent.com/creationix/nvm/v0.33.11/install.sh | NVM_SOURCE="" NVM_DIR=/usr/local/nvm PROFILE=/root/.bashrc bash'
      args:
        warn: false
      register: nvm_result
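Note that even with nvm installed this way, any later task that calls npm still has to load nvm first, because each shell task starts with a fresh environment. A minimal sketch, assuming the NVM_DIR used above and a hypothetical project path:

- name: Install project dependencies
  shell: |
    export NVM_DIR=/usr/local/nvm
    source $NVM_DIR/nvm.sh
    nvm install --lts        # or whichever node version your project needs
    npm install
  args:
    executable: /bin/bash
    chdir: /path/to/your/project   # hypothetical path, adjust to your project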

How to enable nvm in steps in circleci 2.0?

Here are the steps in my config:
steps:
  - run:
      name: Setup nvm and npm
      command: |
        wget -qO- https://raw.githubusercontent.com/creationix/nvm/v0.33.8/install.sh | bash
        export NVM_DIR=$HOME/.nvm
        source $NVM_DIR/nvm.sh
        nvm install 8.9 && nvm alias default 8.9
  - run: npm install && npm run lint && npm test
The second step always fails with this error message
/bin/bash: npm: command not found
I checked .bashrc and I can see the following lines are added to the end of the file
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion
CircleCI 2.0 invokes each step command by starting a new shell with #!/bin/bash -eo pipefail.
If I start a Docker container (docker run -i -t buildpack-deps:xenial), apply the first step, and then start a new shell via #!/bin/bash -eo pipefail, I can see npm is available on the path.
I am using Docker for this project:
version: 2
jobs:
  test_main:
    docker:
      - image: buildpack-deps:xenial
So why does it fail in the CircleCI 2.0 environment? How can I ensure the npm set up in step 1 is available to step 2?
I have tried to add [ -s "$HOME/.bashrc" ] && \. "$HOME/.bashrc" to ~/.bash_profile (in case .bashrc is not executed due to the non-interactive/non-login shell)
To reproduce the issue you can run circleci build with this .circleci/config.yml file
version: 2
jobs:
  build:
    docker:
      - image: buildpack-deps:xenial
    steps:
      - run:
          name: Setup nvm and npm
          command: |
            wget -qO- https://raw.githubusercontent.com/creationix/nvm/v0.33.8/install.sh | bash
            # Activate nvm
            export NVM_DIR=$HOME/.nvm
            touch $HOME/.nvmrc
            source $NVM_DIR/nvm.sh
            # Use node 8.9
            nvm install 8.9 && nvm alias default 8.9
            echo 8.9 > $HOME/.nvmrc
            # Enable nvm in following steps
            echo '[ -s "$HOME/.bashrc" ] && \. "$HOME/.bashrc"' >> $HOME/.bash_profile
            # To fix npm install : "node-pre-gyp: Permission denied"
            npm config set user 0
            npm config set unsafe-perm true
            npm install -g npx webpack webpack-cli jest
            node --version
            npm --version
      - run: npm install
You will see the following error message:
====>> npm install
#!/bin/bash -eo pipefail
npm install
/bin/bash: npm: command not found
Error: Exited with code 127
Step failed
Task failed
The problem lies with these lines:
# Enable nvm in following steps
echo '[ -s "$HOME/.bashrc" ] && \. "$HOME/.bashrc"' >> $HOME/.bash_profile
I was hoping to source .bashrc from .bash_profile. However, since the CircleCI shell is non-interactive, the environment variable PS1 is blank. Hence .bashrc effectively quits immediately once it is sourced, because of this line in .bashrc:
# If not running interactively, don't do anything
[ -z "$PS1" ] && return
I had to put the following lines directly into the file specified by $BASH_ENV:
echo 'export NVM_DIR=$HOME/.nvm' >> $BASH_ENV
echo 'source $NVM_DIR/nvm.sh' >> $BASH_ENV
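Putting that together, a sketch of the two steps with the nvm activation written to $BASH_ENV (which CircleCI sources at the start of every subsequent step):

steps:
  - run:
      name: Setup nvm and npm
      command: |
        wget -qO- https://raw.githubusercontent.com/creationix/nvm/v0.33.8/install.sh | bash
        echo 'export NVM_DIR=$HOME/.nvm' >> $BASH_ENV
        echo 'source $NVM_DIR/nvm.sh' >> $BASH_ENV
        source $BASH_ENV
        nvm install 8.9 && nvm alias default 8.9
  - run: npm install && npm run lint && npm test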
I found that changing the default node via nvm was not working for my steps.
Solved by:
- run:
    name: 'Install Project Node'
    command: |
      set +x
      source ~/.bashrc
      nvm install 12
      NODE_DIR=$(dirname $(which node))
      echo "export PATH=$NODE_DIR:\$PATH" >> $BASH_ENV
Just source /opt/circleci/.nvm/nvm.sh at the beginning of every step.
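For example (assuming a CircleCI machine image where nvm is preinstalled at that path):

- run:
    name: Run tests
    command: |
      source /opt/circleci/.nvm/nvm.sh
      nvm install 12
      npm test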

When running `bin/behat` I get a "Class not found" error

When I run bin/behat I get this error.
PHP Fatal error: Class 'Symfony\Component\Console\Application' not found in
/vendor/behat/behat/src/Behat/Behat/Console/BehatApplication.php on line 31
My composer.json file contains this:
{
    "require": {
        "drupal/drupal-extension": "*",
        "behat/behat": "2.4.*@stable",
        "behat/mink": "1.4@stable",
        "behat/mink-goutte-driver": "*",
        "behat/mink-selenium-driver": "*",
        "behat/mink-selenium2-driver": "*",
        "behat/mink-sahi-driver": "*",
        "behat/mink-zombie-driver": "*",
        "behat/mink-extension": "*"
    },
    "minimum-stability": "dev",
    "config": {
        "bin-dir": "bin/"
    }
}
And for some reason symfony/CssSelector is failing to clone:
[RuntimeException]
Failed to clone via git, https and http protocols, aborting.
git://github.com/symfony/CssSelector.git
fatal: No such remote 'composer'
https://github.com/symfony/CssSelector.git
fatal: No such remote 'composer'
http://github.com/symfony/CssSelector.git
fatal: No such remote 'composer'
I suggest removing the bin and vendor folders and the composer.lock file, then running php composer.phar install again; the packages should install fine then.
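Something along these lines, run from the project root:
$ rm -rf bin vendor composer.lock
$ php composer.phar install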
Composer is only one method for installing Behat; you can remove Behat and try one of the following methods.
The simplest way to install Behat is through Composer.
Method #1 (Composer)
Create a composer.json file in the project root:
{
    "require": {
        "behat/behat": "2.4.*@stable"
    },
    "minimum-stability": "dev",
    "config": {
        "bin-dir": "bin/"
    }
}
Then download composer.phar and run install command:
$ curl http://getcomposer.org/installer | php
$ php composer.phar install
Composer uses GitHub zipball service by default and this service is known for outages from time to time. If you get
The ... file could not be downloaded (HTTP/1.1 502 Bad Gateway)
during installation, just use --prefer-source option:
$ php composer.phar install --prefer-source
After that, you will be able to run Behat with:
$ bin/behat
Method #2 (PHAR)
Also, you can use behat phar package:
$ wget https://github.com/downloads/Behat/Behat/behat.phar
Now you can execute Behat by simply running phar archive through php:
$ php behat.phar
Method #3 (Git)
You can also clone the project with Git by running:
$ git clone git://github.com/Behat/Behat.git && cd Behat
$ git submodule update --init
Then download composer.phar and run install command:
$ wget -nc http://getcomposer.org/composer.phar
$ php composer.phar install
After that, you will be able to run Behat with:
$ bin/behat
Try loading behat.yml with the --config option:
bin/behat -v --config=app/config/behat.yml
Also, I am not sure if you are running a Symfony or Drupal instance.
See the detailed configuration of my behat+mink+selenium+symfony2.8 installation here