IntelliJ pre-commit.com integration

I'm using pre-commit hooks in my project.
When I commit from the command line everything works and the hooks run, but when I try to commit from the IDE it fails with the message:
0 file committed, 2 files failed to commit: dummy commit pre-commit not found. Did you forget to activate your virtualenv?
My virtualenv is active.
What am I missing?
## Edit 1
Ubuntu 20.04.4 LTS
grep ^INSTALL .git/hooks/pre-commit -> INSTALL_PYTHON=/home/lioriz/anaconda3/envs/py36/bin/python
which pre-commit -> /home/lioriz/anaconda3/envs/py36/bin/pre-commit
head -1 $(which pre-commit) -> #!/home/lioriz/anaconda3/envs/py36/bin/python
pre-commit --version -> pre-commit 2.17.0
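Worth noting (my aside, not from the original post): pre-commit install bakes the absolute path of the installing interpreter into the hook as INSTALL_PYTHON, so the hook should keep working even from a shell where the env is not activated. A quick check of that, reusing the paths above:

# Simulate a GUI-launched git: no conda activation, minimal PATH. The baked-in
# interpreter path should still resolve (paths taken from the post):
env -i HOME="$HOME" PATH=/usr/bin:/bin bash -c \
  'test -x /home/lioriz/anaconda3/envs/py36/bin/python && echo "interpreter ok"'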
## Edit 2
.pre-commit-config.yaml:
# See https://pre-commit.com for more information
# See https://pre-commit.com/hooks.html for more hooks
repos:
-   repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v3.2.0
    hooks:
    -   id: trailing-whitespace
    -   id: end-of-file-fixer
    -   id: check-yaml
    -   id: check-added-large-files
.git/hooks/pre-commit:
#!/usr/bin/env bash
# File generated by pre-commit: https://pre-commit.com
# ID: 138fd403232d2ddd5efb44317e38bf03

# start templated
INSTALL_PYTHON=/home/lioriz/anaconda3/envs/py36/bin/python
ARGS=(hook-impl --config=.pre-commit-config.yaml --hook-type=pre-commit)
# end templated

HERE="$(cd "$(dirname "$0")" && pwd)"
ARGS+=(--hook-dir "$HERE" -- "$@")

if [ -x "$INSTALL_PYTHON" ]; then
    exec "$INSTALL_PYTHON" -mpre_commit "${ARGS[@]}"
elif command -v pre-commit > /dev/null; then
    exec pre-commit "${ARGS[@]}"
else
    echo '`pre-commit` not found. Did you forget to activate your virtualenv?' 1>&2
    exit 1
fi
test -x /home/lioriz/anaconda3/envs/py36/bin/python; echo $? -> 0
## Edit 3
IntelliJ runs on Windows 11 with WSL2, and pre-commit is installed inside WSL2 (Ubuntu 20.04.4 LTS).
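That last detail is likely the whole story (my reading; the post does not confirm it): if IntelliJ is using a Windows-side git.exe, the hook runs under Git for Windows' bash, where neither /home/lioriz/anaconda3/envs/py36/bin/python nor a pre-commit on PATH exists, so the script falls through to its final else branch and prints exactly the error above. The clean fix is to point IntelliJ at the WSL-side git executable (Settings | Version Control | Git; recent IDE versions accept \\wsl$ paths) so hooks run inside WSL. Alternatively, here is a hedged sketch of patching the dispatch block of the generated hook from Edit 2 to re-enter WSL:

# Hypothetical patch to the if/else block of .git/hooks/pre-commit shown above;
# detecting wsl.exe marks a Windows-side invocation. Caveat: $HERE would be a
# Windows-style path in that case, so path translation may still be needed.
if [ -x "$INSTALL_PYTHON" ]; then
    exec "$INSTALL_PYTHON" -mpre_commit "${ARGS[@]}"
elif command -v wsl.exe > /dev/null; then
    exec wsl.exe "$INSTALL_PYTHON" -mpre_commit "${ARGS[@]}"
elif command -v pre-commit > /dev/null; then
    exec pre-commit "${ARGS[@]}"
else
    echo '`pre-commit` not found. Did you forget to activate your virtualenv?' 1>&2
    exit 1
fi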

Related

ansible installing npm using nvm but returning npm command not found on npm install

I am trying to install npm with nvm using an Ansible playbook on Ubuntu 18.04.2 LTS. nvm gets installed, but running the npm install command returns an error ["/bin/bash: npm: command not found"].
This is the script:
- name: Create destination dir if it does not exist
  file:
    mode: 0775
    path: "/usr/local/nvm"
    state: directory
  when: "nvm_dir != ''"

- name: Install NVM
  shell: 'curl https://raw.githubusercontent.com/creationix/nvm/v0.33.11/install.sh | NVM_SOURCE="" NVM_DIR=/usr/local/nvm PROFILE=/root/.bashrc bash'
  args:
    warn: false
  register: nvm_result
This is the repository where I got the code: https://github.com/morgangraphics/ansible-role-nvm
By default, the shell module runs commands with /bin/sh unless an executable has been explicitly set through the module's args keyword. The nvm install script needs /bin/bash, so if bash is not found (or is not where the host expects it), you get this error. /bin/bash is installed on almost every distribution, so this is most likely a path issue. I also updated the playbook below with a corrected condition; the sketch after it shows why sh alone never finds npm.
---
- hosts: localhost
  tasks:
    - name: Create destination dir if it does not exist
      file:
        mode: 0775
        path: "/usr/local/nvm"
        state: directory
      when: "nvm_dir is not defined"

    - name: Install NVM
      shell: 'curl https://raw.githubusercontent.com/creationix/nvm/v0.33.11/install.sh | NVM_SOURCE="" NVM_DIR=/usr/local/nvm PROFILE=/root/.bashrc bash'
      args:
        warn: false
      register: nvm_result
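To make that concrete (my illustration; the /usr/local/nvm path comes from the playbook above): nvm is a shell function that must be sourced into bash, so the shell module's default /bin/sh never gains npm on its PATH:

# Default shell module behavior: /bin/sh, nvm never loaded
sh -c 'command -v npm'        # prints nothing; exit status 1
# With bash and nvm.sh sourced explicitly, npm resolves:
bash -c 'export NVM_DIR=/usr/local/nvm && . "$NVM_DIR/nvm.sh" && command -v npm'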

AWS-S3 orb - Circle CI - Unexpected argument(s): arguments

I'm getting the following error in my build:
#!/bin/sh -eo pipefail
# Error calling workflow: 'build-deploy'
# Error calling job: 'build_test_es'
# Error calling command: 'aws-s3/sync'
# Unexpected argument(s): arguments
#
# -------
# Warning: This configuration was auto-generated to show you the message above.
# Don't rerun this job. Rerunning will have no effect.
false
Exited with code 1
This is how my config.yml file looks; I've suppressed some parts.
version: 2.1
orbs:
  aws-s3: circleci/aws-s3@1.0.0
jobs:
  build_test_es:
    docker:
      - image: circleci/node:10.15
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: NPM install
          command: |
            cd app
            pwd
            npm install
      - run:
          name: NPM build
          command: |
            cd app
            pwd
            npm run build
      - run: mkdir bucket && echo "lorum ipsum" > bucket/build_asset.txt
      - aws-s3/sync:
          from: bucket
          to: 's3://my-s3-bucket-name/prefix'
          arguments: |
            --acl public-read \
            --cache-control "max-age=86400"
          overwrite: true
As you can see I'm using the default command from the docs:
https://circleci.com/orbs/registry/orb/circleci/aws-s3#commands-sync
Is the orb broken? Have I misspelled something?
Fixed it by updating the orb. Nice way to waste time.
version: 2.1
orbs:
  aws-s3: circleci/aws-s3@1.0.3

gitlab-ci job not running script

I am new to gitlab-ci and am trying a minimal Python application based on a GitLab template.
My .gitlab-ci.yml file is below:
# This file is a template, and might need editing before it works on your project.
# Official language image. Look for the different tagged releases at:
# https://hub.docker.com/r/library/python/tags/
#image: python:latest

# Change pip's cache directory to be inside the project directory since we can
# only cache local items.
variables:
  PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache"

stages:
  - test
  - run

# Pip's cache doesn't store the python packages
# https://pip.pypa.io/en/stable/reference/pip_install/#caching
#
# If you want to also cache the installed packages, you have to install
# them in a virtualenv and cache it as well.
#cache:
#  paths:
#    - .cache/pip
#    - venv/

before_script:
  - python -V  # Print out python version for debugging
  #- pip install virtualenv
  #- virtualenv venv
  - python -m venv venv
  - venv/scripts/activate

job1:
  stage: test
  script:
    - python setup.py test
    #- pip install tox flake8 # you can also use tox
    #- tox -e py36,flake8

job2:
  stage: run
  script:
    - pip install wheel
    - python setup.py bdist_wheel
  artifacts:
    paths:
      - dist/*.whl

#pages:
#  script:
#    - pip install sphinx sphinx-rtd-theme
#    - cd doc ; make html
#    - mv build/html/ ../public/
#  artifacts:
#    paths:
#      - public
#  only:
#    - master
The jobs are seen within the GitLab web UI, and they appear to run on my runner (a Windows-based shell executor).
When I look at the output for the jobs, it appears as if the actual script commands for each job aren't running at all.
Here's the output from job1:
Running with gitlab-runner 11.2.0 (35e8515d)
on GKUHN-L04 b0162458
Using Shell executor...
Running on GKUHN-L04...
Fetching changes...
Removing venv/
HEAD is now at 2484105 And agai..
From https://gitlab.analog.com/GKuhn/test_gitlab_ci
- [deleted] (none) -> origin/test_ci
fdd4216..cd618ba master -> origin/master
Checking out cd618ba9 as master...
Skipping Git submodules setup
$ python -V
Python 3.7.0
$ python -m venv venv
$ venv/scripts/activate
Job succeeded
And job2:
Running with gitlab-runner 11.2.0 (35e8515d)
on GKUHN-L04 b0162458
Using Shell executor...
Running on GKUHN-L04...
Fetching changes...
Removing venv/
HEAD is now at cd618ba updated .gitlab-ci.yml file
Checking out cd618ba9 as master...
Skipping Git submodules setup
$ python -V
Python 3.7.0
$ python -m venv venv
$ venv/scripts/activate
Uploading artifacts...
WARNING: dist/*.whl: no matching files
ERROR: No files to upload
Job succeeded
What am I doing wrong??
Turns out this is a bug on Windows with the shell executor:
https://gitlab.com/gitlab-org/gitlab-runner/issues/2730
And a duplicate of this question: Gitlab CI does not execute npm scripts
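An aside of mine, separate from the runner bug: on a Linux shell executor this config would hit a second problem, because the activate script must be sourced rather than executed, or the PATH change dies with the child process:

python -m venv venv
. venv/bin/activate      # sourcing modifies *this* shell's PATH (bin/, not scripts/, on Linux)
command -v python        # -> ./venv/bin/python
deactivate               # restores the previous environment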

How to enable nvm in steps in circleci 2.0?

Here are the steps in my .circleci/config.yml:
steps:
  - run:
      name: Setup nvm and npm
      command: |
        wget -qO- https://raw.githubusercontent.com/creationix/nvm/v0.33.8/install.sh | bash
        export NVM_DIR=$HOME/.nvm
        source $NVM_DIR/nvm.sh
        nvm install 8.9 && nvm alias default 8.9
  - run: npm install && npm run lint && npm test
The second step always fails with this error message:
/bin/bash: npm: command not found
I checked .bashrc and saw that the following lines had been added to the end of the file:
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion
CircleCI 2.0 invokes each step's command by starting a new shell with #!/bin/bash -eo pipefail.
If I start a docker container (docker run -i -t buildpack-deps:xenial), apply the first step, and then start a new shell via #!/bin/bash -eo pipefail, I can see that npm is available on the PATH.
I am using docker for this project
version: 2
jobs:
  test_main:
    docker:
      - image: buildpack-deps:xenial
So why does it fail in the CircleCI 2.0 environment? How can I ensure that npm set up in step 1 is available in step 2?
I have tried adding [ -s "$HOME/.bashrc" ] && \. "$HOME/.bashrc" to ~/.bash_profile (in case .bashrc is not executed because the shell is non-interactive/non-login).
To reproduce the issue you can run circleci build with this .circleci/config.yml file:
version: 2
jobs:
  build:
    docker:
      - image: buildpack-deps:xenial
    steps:
      - run:
          name: Setup nvm and npm
          command: |
            wget -qO- https://raw.githubusercontent.com/creationix/nvm/v0.33.8/install.sh | bash
            # Activate nvm
            export NVM_DIR=$HOME/.nvm
            touch $HOME/.nvmrc
            source $NVM_DIR/nvm.sh
            # Use node 8.9
            nvm install 8.9 && nvm alias default 8.9
            echo 8.9 > $HOME/.nvmrc
            # Enable nvm in following steps
            echo '[ -s "$HOME/.bashrc" ] && \. "$HOME/.bashrc"' >> $HOME/.bash_profile
            # To fix npm install: "node-pre-gyp: Permission denied"
            npm config set user 0
            npm config set unsafe-perm true
            npm install -g npx webpack webpack-cli jest
            node --version
            npm --version
      - run: npm install
You will see the following error message:
====>> npm install
#!/bin/bash -eo pipefail
npm install
/bin/bash: npm: command not found
Error: Exited with code 127
Step failed
Task failed
The problem lies with these lines:
# Enable nvm in following steps
echo '[ -s "$HOME/.bashrc" ] && \. "$HOME/.bashrc"' >> $HOME/.bash_profile
I was hoping to source .bashrc from .bash_profile. However, since CircleCI's shell is non-interactive, the environment variable PS1 is blank, so .bashrc quits almost immediately once it is sourced, because of this line in .bashrc:
# If not running interactively, don't do anything
[ -z "$PS1" ] && return
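To see that guard in action (my illustration): bash only sets PS1 in interactive shells, so a non-interactive source of .bashrc returns at that line before it ever reaches the nvm setup:

bash -c  'echo "PS1=[${PS1:-unset}]"'    # non-interactive: PS1=[unset]
bash -ic 'echo "PS1=[${PS1:-unset}]"'    # interactive: PS1 holds a prompt string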
I have to put the following lines directly in the file specified by $BASH_ENV:
echo 'export NVM_DIR=$HOME/.nvm' >> $BASH_ENV
echo 'source $NVM_DIR/nvm.sh' >> $BASH_ENV
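Why this works (my summary of standard bash behavior, not spelled out above): bash sources the file named by $BASH_ENV whenever it starts non-interactively, and CircleCI points $BASH_ENV at a per-job file, so every later step effectively begins with:

export NVM_DIR=$HOME/.nvm    # from $BASH_ENV, sourced automatically each step
source $NVM_DIR/nvm.sh       # defines the nvm function and puts node/npm on PATH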
I found that changing the default node via nvm was not working for my steps.
Solved by:
- run:
    name: 'Install Project Node'
    command: |
      set +x
      source ~/.bashrc
      nvm install 12
      NODE_DIR=$(dirname $(which node))
      echo "export PATH=$NODE_DIR:\$PATH" >> $BASH_ENV
Just source /opt/circleci/.nvm/nvm.sh at the beginning of every step.
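Spelled out (the /opt/circleci/.nvm path comes from this answer; the rest is illustrative), each step would then start with:

source /opt/circleci/.nvm/nvm.sh   # load the nvm shell function for this step
nvm use default                    # put the default-aliased node/npm on PATH
npm --version                      # npm now resolves for the rest of the step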

How to configure puppet so that it installs yum repos with debug mode?

When I run puppet apply, it tries to install packages using the following command:
/usr/bin/yum -d 0 -e 0 -y install couchdb-1.2.0-7.el6
How can I configure it so that it runs the following instead:
/usr/bin/yum -y install couchdb-1.2.0-7.el6
That is, without the flags that suppress the debug output?
You could create a module with an exec resource in it.
exec { "couchdb":
  command => "/usr/bin/yum -y -d 0 install couchdb-1.2.0-7.el6",
  path    => "/usr/local/bin/:/bin/",
}
As a test I did an update of my wget. Before running the module, wget was at 1.11.4-2.el5; in my repository I had 1.11.4-3.el5_8.1.
Here are the results of my 'yum update list wget.x86_64':
Installed Packages
wget.x86_64 1.11.4-2.el5 installed
Available Packages
wget.x86_64 1.11.4-3.el5_8.1 update
This is my puppet output after applying the class (with the debug option enabled to show you the output):
debug: Executing '/usr/bin/yum -y -d 0 update wget.x86_64'
notice: /Stage[main]/Yum-update-test/Exec[wget]/returns: executed successfully
And this is the output of the 'yum update list wget.x86_64' after the class/module was applied:
Installed Packages
wget.x86_64 1.11.4-3.el5_8.1 installed
While waiting for a real fix through this ticket:
https://tickets.puppetlabs.com/browse/PUP-3453
your only option is to modify the yum package provider directly:
/usr/lib/ruby/site_ruby/1.8/puppet/provider/package/yum.rb
def install
  wanted = @resource[:name]
  # If not allowing virtual packages, do a query to ensure a real package exists
  unless @resource.allow_virtual?
    yum *['-d', '0', '-e', '0', '-y', install_options, :list, wanted].compact
  end
Change the '-d' value to 10 and you'll be done.
If you provide yum the -d or -e options multiple times, it will use the most recent values. So, you can also use install_options on your package resources. For example:
package { 'wget':
  install_options => ['-d', '10', '-e', '1', '-v'],
}
Your puppet log will then include something like:
2017-10-19 14:02:48 +0000 Puppet (debug): Executing: '/usr/bin/yum -d 0 -e 0 -y -d 10 -e 1 -v install wget'
... and all of the debug output.
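The same override can be sanity-checked outside Puppet (my sketch; run as root, package name arbitrary). Because the later -d 10 -e 1 win out, this command produces full debug output despite the leading -d 0 -e 0:

/usr/bin/yum -d 0 -e 0 -y -d 10 -e 1 -v install wget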