How to fetch configurations from multiple subdirectories of a git repo using Spring Cloud Config? - spring-cloud-config

How can I fetch configurations from multiple subdirectories of a git repo using Spring Cloud Config?
I tried the configuration below:
spring:
  application:
    name: entity-explorer-ms
  profiles:
    active: ${spring.profiles.active}
  cloud:
    config:
      server:
        git:
          searchPaths: commons   # folder present in the git repo
          # files present in the repo: commons-application, entity-explorer
          label: ${spring.cloud.config.label}
  # the placeholder values are passed in as VM arguments
  config:
    import: configserver:${spring.cloud.import}
What I am expecting:
I want to fetch values from files inside the commons folder, the root folder, and also from other folders.
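For what it's worth, the server-side search-paths property takes a list of folders (wildcards and the {application} placeholder are also supported), and these are searched in addition to the repository root. A minimal server-side sketch, assuming a single git repository; the URI and folder names below are illustrative:

spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/example/config-repo   # hypothetical repository
          # folders searched in addition to the repo root;
          # '{application}' resolves to the requesting application's name
          search-paths:
            - commons
            - entity-explorer
            - '{application}'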

Related

Why can't I find the Gitlab cache directory?

My GitLab CI/CD job defines a Maven cache, but it does not seem to work: both the /cache directory and the configured cache path are empty.
This is my .gitlab-ci.yml:
variables:
  MAVEN_OPTS: "-Dhttps.protocols=TLSv1.2 -Dmaven.repo.local=$CI_PROJECT_DIR/.m2/repository -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=WARN -Dorg.slf4j.simpleLogger.showDateTime=true -Djava.awt.headless=true"
  MAVEN_CLI_OPTS: "--batch-mode --errors --fail-at-end --show-version -DinstallAtEnd=true -DdeployAtEnd=true"

.cache-m2: &cache-m2
  cache:
    - key: "mavenRepo"
      paths:
        - .m2/repository

test_backend:
  stage: test
  image: maven:3-openjdk-11
  tags:
    - docker
  <<: *cache-m2
  script:
    - ls -al .m2/repository || true
    - mvn help:evaluate -Dexpression=settings.localRepository -q -DforceStdout
    - mvn $MAVEN_CLI_OPTS clean test
This is the job log:
Running with gitlab-runner 13.12.0 (7a6612da)
on gitlab-runner-gitlab-runner-54b9d6b99 Ejwdvsgc
Resolving secrets 00:00
Preparing the "kubernetes" executor 00:00
Using Kubernetes namespace: gitlab-runner
Using Kubernetes executor with image maven:3-openjdk-11 ...
Preparing environment 00:03
WARNING: Pulling GitLab Runner helper image from Docker Hub. Helper image is migrating to registry.gitlab.com, for more information see https://docs.gitlab.com/runner/configuration/advanced-configuration.html#migrate-helper-image-to-registrygitlabcom
Waiting for pod gitlab-runner/runner-ejwdvsgc-project-663-concurrent-2chxfp to be running, status is Pending
Running on runner-ejwdvsgc-project-663-concurrent-2chxfp via gitlab-runner-gitlab-runner-54b9d6b99-w8dbf...
Getting source from Git repository 00:02
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/Ejwdvsgc/2/t1/mvp/.git/
Created fresh repository.
Checking out 07e45667 as gitlab-test...
Skipping Git submodules setup
Restoring cache 00:01
No URL provided, cache will not be downloaded from shared cache server. Instead a local version of cache will be extracted.
Checking cache for mavenRepo-4...
Successfully extracted cache
Executing "step_script" stage of the job script
$ ls -al .m2/repository || true
ls: cannot access '.m2/repository': No such file or directory
$ mvn help:evaluate -Dexpression=settings.localRepository -q -DforceStdout
/builds/Ejwdvsgc/2/t1/mvp/.m2/repository
...
...
Saving cache for successful job 00:05
Creating cache mavenRepo-4...
.m2/repository: found 4572 matching files and directories
No URL provided, cache will be not uploaded to shared cache server. Cache will be stored only locally.
Created cache
It looks like a cache was found (Checking cache for mavenRepo-4) and extracted (Successfully extracted cache),
but the directories do not exist.
What am I doing wrong?
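One thing the log itself hints at: the two "No URL provided" lines mean no shared (distributed) cache is configured, so the cache archive only lives locally on the runner, and with the Kubernetes executor that local storage generally does not outlive the job pod, which can leave you with an "extracted" cache that contains nothing. A small debugging sketch of the same job (the extra echo/ls steps are illustrative additions) to confirm what actually gets restored into the project directory:

test_backend:
  stage: test
  image: maven:3-openjdk-11
  tags:
    - docker
  <<: *cache-m2
  script:
    # illustrative debugging steps: show the working directory and whatever
    # the cache helper restored, before Maven runs
    - echo "$CI_PROJECT_DIR" && pwd
    - ls -al "$CI_PROJECT_DIR"
    - ls -al .m2/repository || true
    - mvn help:evaluate -Dexpression=settings.localRepository -q -DforceStdout
    - mvn $MAVEN_CLI_OPTS clean test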

Verdaccio User Registration Disabled

I am trying to deploy Verdaccio to my Kubernetes cluster to use as a shared registry for my other components. My problem is that I cannot get it to let me authenticate properly. I am using the most recent Helm chart for the deployment.
The configuration disables registration and expects authentication.
# This is the config file used for the docker images.
# It allows all users to do anything, so don't use it on production systems.
#
# Do not configure host and port under `listen` in this file
# as it will be ignored when using docker.
# see https://github.com/verdaccio/verdaccio/blob/master/docs/docker.md#docker-and-custom-port-configuration
#
# Look here for more config file examples:
# https://github.com/verdaccio/verdaccio/tree/master/conf
#

# path to a directory with all packages
storage: /verdaccio/storage/data

web:
  # WebUI is enabled as default, if you want disable it, just uncomment this line
  #enable: false
  title: DiPlom NPM Registry - Verdaccio

auth:
  htpasswd:
    file: /verdaccio/storage/htpasswd
    # Maximum amount of users allowed to register, defaults to +infinity.
    # You can set this to -1 to disable registration.
    max_users: -1

# a list of other known repositories we can talk to
uplinks:
  npmjs:
    url: https://registry.npmjs.org/

packages:
  '@*/*':
    # scoped packages
    access: $authenticated
    publish: $authenticated
    # proxy: npmjs

  '**':
    # allow all users (including non-authenticated users) to read and
    # publish all packages
    #
    # you can specify usernames/groupnames (depending on your auth plugin)
    # and three keywords: $all, $anonymous, $authenticated
    access: $authenticated

    # allow all known users to publish packages
    # (anyone can register by default, remember?)
    publish: $authenticated

    # if package is not available locally, proxy requests to 'npmjs' registry
    # proxy: npmjs

# To use `npm audit` uncomment the following section
middlewares:
  audit:
    enabled: true

# log settings
logs:
  - {type: stdout, format: pretty, level: http}
  #- {type: file, path: verdaccio.log, level: info}
I need user credentials to obtain the corresponding token so I can push to and pull from the registry. However, so far nothing I entered into the htpasswd file has worked. I've looked through the issues on GitHub and a few tips on how to generate the htpasswd entry, but with no success. When I try to log in via npm login, I get the following response:
npm ERR! code E409
npm ERR! 409 Conflict - PUT http://<IngressURL>/-/user/org.couchdb.user:admin - user registration disabled
npm ERR! A complete log of this run can be found in:
npm ERR! /Users/<USR>/.npm/_logs/2021-08-13T10_23_30_382Z-debug.log
I then tried to configure the server to allow registrations by changing max_users to 1. When running npm adduser I get the very same response. How am I supposed to use Verdaccio if there is no way to authenticate with the service? Or am I missing something?
Thanks.
Update:
I've played around with various deployments (Docker, Compose, Minikube) and figured out that the ingress created by the Helm chart seems to be the problem here. I'm not sure what exactly is not working, but as soon as I manually expose the pod via a NodePort I can create a user properly.
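For reference, the 409 "user registration disabled" response is what the htpasswd plugin sends back when registration is disabled (max_users: -1) and the account cannot be created, so accounts would have to be pre-provisioned in the htpasswd file. A sketch of the auth block with registration allowed; the limit of 100 is an illustrative value:

auth:
  htpasswd:
    file: /verdaccio/storage/htpasswd
    # omit max_users (default: unlimited) or set it to a positive number to
    # let `npm adduser` create accounts; -1 disables registration, so every
    # user must already exist in the htpasswd file
    max_users: 100   # illustrative value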

docker volume, configuration files aren't generated

This is the same kind of issue as: what causes a docker volume to be populated?
I'm trying to share Apache's configuration files in /etc/apache2 with my host, but the files aren't generated automatically within the shared folder.
As a minimal example:
Dockerfile
FROM debian:9
RUN apt update
#Install apache
RUN apt install -y apache2 apache2-dev
ENTRYPOINT ["apache2ctl", "-D", "FOREGROUND"]
docker-compose.yml
version: '2.2'
services:
  apache:
    container_name: apache-server
    volumes:
      - ./log/:/var/log/apache2
      - ./config:/etc/apache2/   # removing this line lets the log files be generated
    image: httpd-perso2
    build: .
    ports:
      - "80:80"
With this configuration, neither ./config nor ./log is filled with the files/folders generated by the container, even though the log files should contain at least one error (I get apache-server | The Apache error log may have more information.).
If I remove the ./config volume, the Apache log files are generated properly. Any clue why this happens? How can I share the Apache config files?
I have the same issue with a Django settings file, so it seems to be related to config files generated by an application.
What I tried:
- using VOLUME in the Dockerfile
- running docker-compose as root or chmod 777 on the folders
- creating files within the container in those directories to see if they are created on the host (and they are)
- on the host, creating the shared folders owned by my user (they are owned by root when generated automatically)
- trying with docker run, with exactly the same issue
For specs:
- Docker version 19.03.5
- using a VPS with debian buster
- docker-compose version 1.25.3
Thanks for helping.
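One detail worth keeping in mind here (a sketch, not a fix confirmed in this thread): a host bind mount such as ./config:/etc/apache2/ shadows whatever the image ships at that path with an initially empty host directory, whereas a named volume is pre-populated with the image's content the first time it is mounted empty. A hypothetical variant using a named volume for the Apache configuration:

version: '2.2'
services:
  apache:
    container_name: apache-server
    image: httpd-perso2
    build: .
    ports:
      - "80:80"
    volumes:
      - ./log/:/var/log/apache2
      # named volume: Docker copies the image's /etc/apache2 content into it
      # on first use, unlike an (initially empty) host bind mount
      - apache-config:/etc/apache2/

volumes:
  apache-config: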

Downloading artifacts from coordinator forbidden

I am building a JAR app with GitLab CI, and after the build the JAR is passed to the next job as an artifact.
Mavenbuild:artifact:
  stage: mavenbuild
  image:
    name: maven:3.6.0-jdk-8
  tags:
    - docker
  script:
    - mvn clean install -pl batch-o365 -am -q
  artifacts:
    paths:
      - batch-o365/app

Dockerbuild:ok:
  stage: dockerbuild
  image:
    name: ekino/docker-buildbox:latest-dind-aws
  dependencies:
    - Mavenbuild:artifact
  tags:
    - docker
  script:
    - docker build .
The artifact is uploaded successfully:
Uploading artifacts...
batch-o365/app: found 3 matching files
Uploading artifacts to coordinator... ok id=11969 responseStatus=201 Created token=xxx
But when I try to retrieve it in the next job, I get this error:
Downloading artifacts for Mavenbuild:artifact (11969)...
ERROR: Downloading artifacts from coordinator... forbidden id=11969 responseStatus=403 Forbidden status=403 Forbidden token=xxx
FATAL: permission denied
ERROR: Job failed: exit code 1
I already use artifacts in another project on this GitLab server and they work well.
Has anyone else had this issue with artifacts?
I found the solution.
We are using internal proxies and I forgot to exclude the GitLab URL.
With this modification:
Dockerbuild:ok:
  stage: dockerbuild
  image:
    name: ekino/docker-buildbox:latest-dind-aws
  variables:
    HTTP_PROXY: http://proxy:8000
    HTTPS_PROXY: http://proxy:8000
    NO_PROXY: 169.254.169.254,gitlab.xxx.com
the artifact is now retrieved correctly by the job.
Downloading artifacts for Mavenbuild:artifact (11989)...
Downloading artifacts from coordinator... ok id=11989 responseStatus=200 OK token=--xxx
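A possible follow-up (not part of the original fix): top-level variables in .gitlab-ci.yml are inherited by every job, so the proxy settings could be declared once and any job that talks to the GitLab coordinator would get the same NO_PROXY exclusion.

# hypothetical refactor: declare the proxy variables globally instead of per job
variables:
  HTTP_PROXY: http://proxy:8000
  HTTPS_PROXY: http://proxy:8000
  NO_PROXY: 169.254.169.254,gitlab.xxx.com   # keep the GitLab host out of the proxy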

Ansible not picking up custom module

I'm having issues with Ansible picking up a module that I've added.
The module is called 'passwordstore': https://github.com/morphje/ansible_pass_lookup/.
I'm using Ansible 2.2.
In my playbook directory, I've added a 'library' folder and put the contents of that GitHub repository into it. I've also tried uncommenting library = /usr/share/ansible/modules and adding the module files there, and it still doesn't get picked up.
I have also tried setting the environment variable ANSIBLE_LIBRARY=/usr/share/ansible/modules.
My Ansible playbook looks like this:
---
- name: example play
  hosts: all
  gather_facts: false

  tasks:
    - name: set password
      debug: msg="{{ lookup('passwordstore', 'files/test create=true') }}"
And when I run this I get this error:
ansible-playbook main.yml
PLAY [example play] ******************************************************
TASK [set password] ************************************************************
fatal: [backend.example.name]: FAILED! => {"failed": true, "msg": "lookup plugin (passwordstore) not found"}
fatal: [mastery.example.name]: FAILED! => {"failed": true, "msg": "lookup plugin (passwordstore) not found"}
to retry, use: --limit @/etc/ansible/roles/test-role/main.retry
What am I missing? It may just be the way I'm trying to add the custom module, but any guidance would be appreciated.
It's a lookup plugin (not a module), so it should go into a directory named lookup_plugins (not library).
Alternatively, add the path to the cloned repository in ansible.cfg using the lookup_plugins setting.
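A minimal sketch of the layout the answer implies (the file and directory names are assumptions, not taken from the repository): with the plugin file placed in a lookup_plugins/ directory next to the playbook, the lookup in the play above resolves without any extra configuration.

# assumed layout next to the playbook:
#   ./main.yml
#   ./lookup_plugins/passwordstore.py   # plugin file copied from the cloned repo
#
# alternatively, point Ansible at the clone via ansible.cfg:
#   [defaults]
#   lookup_plugins = ./ansible_pass_lookup/lookup_plugins   # hypothetical path
---
- name: example play
  hosts: all
  gather_facts: false
  tasks:
    - name: set password
      debug: msg="{{ lookup('passwordstore', 'files/test create=true') }}"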