drone "Insufficient privileges to use privileged mode" - drone.io

I have written a .drone.yml in my Gogs git repository, but when I push the change, the Drone web UI tells me "Insufficient privileges to use privileged mode". How can I fix it?
This is my .drone.yml:
pipeline:
  build:
    image: test-harbor.cx580.com/centos/centos7:Beat2.0
    privileged: true
    commands:
      - mkdir -p /data/k8s/drone/jar-db/
      - \cp README.md /data/k8s/drone/jar-db/
      - ls /data/k8s/drone/jar-db/
  push:
    image: plugins/docker
    repo: test-harbor.cx580.com/centos/centos7:Beat2.0
    registry: test-harbor.cx580.com
    username: ci
    password: '1qaz!QAZ'
    tags:
      - latest
I searched on Google, and this website told me: "Your repository isn't in the trusted list of repositories. Get in touch with DevOps and ask them to trust it." But how can I make the repository trusted?
I also opened the repository settings in the Drone web UI and checked "Trusted", but it still failed:

Set the drone-server environment variables (my repository host is GitLab):
...
- DRONE_OPEN=false
- DRONE_ADMIN=<your gitlab username>
- DRONE_GITLAB_PRIVATE_MODE=true
...
Then enable the Trusted flag in the Drone repository settings: Settings -> Trusted. Only a Drone admin user can enable this flag, which is why your username has to be listed in DRONE_ADMIN first.
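For context, here is a minimal docker-compose sketch of a drone-server service with those variables, assuming Drone 0.8 with a GitLab remote; the hostnames and secret are placeholders:

services:
  drone-server:
    image: drone/drone:0.8
    ports:
      - "80:8000"
    environment:
      - DRONE_OPEN=false
      - DRONE_ADMIN=<your gitlab username>            # admins can enable the Trusted flag
      - DRONE_GITLAB=true
      - DRONE_GITLAB_PRIVATE_MODE=true
      - DRONE_GITLAB_URL=https://gitlab.example.com   # placeholder
      - DRONE_HOST=https://drone.example.com          # placeholder
      - DRONE_SECRET=<shared secret>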

Related

ECR login fails in gitlab runner

I'm trying to deploy to ECS with a task definition, and I'm using ECR to store my Docker image in AWS. When I try to log in to ECR from GitLab CI/CD with a shared runner, I'm getting errors.
image: docker:19.03.10

services:
  - docker:dind

variables:
  REPOSITORY_URL: <REPOSITORY_URL>
  TASK_DEFINITION_NAME: <Task_Definition>
  CLUSTER_NAME: <CLUSTER_NAME>
  SERVICE_NAME: <SERVICE_NAME>

before_script:
  - apk add --no-cache curl jq python py-pip
  - pip install awscli
  - aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID
  - aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY
  - aws configure set region $AWS_DEFAULT_REGION
  - $(aws ecr get-login --no-include-email --region "${AWS_DEFAULT_REGION}")
  - IMAGE_TAG="$(echo $CI_COMMIT_SHA | head -c 8)"

stages:
  - build
  - deploy

build:
  stage: build
  script:
    - echo "Building image..."
    - docker build -t $REPOSITORY_URL:latest .
    - echo "Tagging image..."
    - docker tag $REPOSITORY_URL:latest $REPOSITORY_URL:$IMAGE_TAG
    - echo "Pushing image..."
    - docker push $REPOSITORY_URL:latest
    - docker push $REPOSITORY_URL:$IMAGE_TAG
Error details: (error screenshot omitted)
There are two approaches that you can take to access a private registry. Both require setting the CI/CD variable DOCKER_AUTH_CONFIG with appropriate authentication information.
Per-job: To configure one job to access a private registry, add DOCKER_AUTH_CONFIG as a CI/CD variable.
Per-runner: To configure a runner so all its jobs can access a private registry, add DOCKER_AUTH_CONFIG as an environment variable in the runner’s configuration.
https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#access-an-image-from-a-private-container-registry
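The value of DOCKER_AUTH_CONFIG is a docker config.json document. A minimal sketch, with registry.example.com as a hypothetical registry host and the auth field holding base64("user:password"):

{
  "auths": {
    "registry.example.com": {
      "auth": "dXNlcjpwYXNzd29yZA=="
    }
  }
}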
I see the following issues in your config (a sketch addressing both follows below):
- docker login is missing
- without DOCKER_HOST, docker:dind will not work
Please try to follow this tutorial - link; a YouTube video about the tutorial is here.
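A minimal sketch of those two fixes, assuming awscli v2 (which provides aws ecr get-login-password) and the standard non-TLS docker:dind port; <ACCOUNT_ID> and <REGION> are placeholders for your ECR registry host:

variables:
  DOCKER_HOST: tcp://docker:2375   # point the docker client at the dind service
  DOCKER_TLS_CERTDIR: ""           # run dind without TLS on port 2375

before_script:
  # explicit docker login against the ECR registry host
  - aws ecr get-login-password --region "$AWS_DEFAULT_REGION" | docker login --username AWS --password-stdin <ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com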

GitHub Actions: How to run remote SSH commands using a self hosted runner

Since I need to use a self-hosted runner, the option of using existing marketplace SSH actions is not viable: they tend to rely on Docker and build an image on GitHub Actions, which fails for me because of a lack of permissions for an unknown user. Existing GitHub SSH actions such as appleboy fail due to that reliance on Docker.
Please note: appleboy and other similar actions work great when using GitHub-hosted runners.
Since it's a self-hosted runner, I have access to it all the time, so setting up an SSH profile and then running ssh commands natively works pretty well.
To create an SSH profile and use it in GitHub Actions:
1. Create a "config" file in ~/.ssh/ (if it doesn't exist already).
2. Add a profile to ~/.ssh/config. For example:
Host dev
    HostName qa.bluerelay.com
    User ec2-user
    Port 22
    IdentityFile /home/ec2-user/mykey/something.pem
3. Now to test it, run this on the self-hosted runner:
ssh dev ls
This should run the ls command inside the dev server.
4. Now that the SSH profile is set up, you can easily run remote SSH commands from GitHub Actions. For example:
name: Test Workflow
on:
  push:
    branches: ["main"]
jobs:
  build:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v2
      - name: Download Artifacts into Dev
        run: |
          ssh dev 'cd GADeploy && pwd && sudo wget https://somelink.amazon.com/web.zip'
      - name: Deploy Web Artifact to Dev
        run: |
          ssh dev 'cd GADeploy && sudo ./deploy-web.sh web.zip'
This works like a charm!!
Independently of the runner itself, you would first need to check that SSH works from your server:
curl -v telnet://github.com:22
# Assuming your ~/.ssh/id_rsa.pub is copied to your GitHub profile page
git ls-remote git@github.com:you/YourRepository
Then you can install your runner, without Docker.
Finally, your runner script can execute jobs.<job_id>.steps[*].run commands, using git with SSH URLs.
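For instance, a run step can use git with an SSH URL directly (the repository name below is hypothetical):

- name: Clone over SSH
  run: |
    git clone git@github.com:you/YourRepository.git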

Why is my static site broken using github action and azure cli to deploy?

I'm trying to deploy my static site to Azure storage, but I have been having issues getting the site to open correctly, even though the GitHub action executes without errors and the files seem to be in place. In the browser, index.html seems to load along with the CSS and JS... but the site does not run properly. The console shows a failure in the JS (screenshot omitted).
The odd thing is that I don't have any issues using the azure storage extension in vscode or using the azure cli:
az storage blob upload-batch --account-name <ACCOUNT_NAME> -d '$web' -s ./dist --connection-string '<CONNECTION_STRING>'
when I deploy from my laptop.
My github action looks like this:
name: Blob storage website CI
on:
  push:
    branches: [master]
  pull_request:
    branches: [master]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: npm install
        run: |
          npm install
      - name: npm build
        run: |
          npm run build
      - name: Azure Login
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - name: Azure CLI script
        uses: azure/CLI@v1
        with:
          azcliversion: latest
          inlineScript: |
            az storage blob upload-batch --account-name <ACCOUNT_NAME> -d '$web' -s ./dist --connection-string '${{ secrets.BLOB_STORAGE_CONNECTION_STRING }}'
      # Azure logout
      - name: logout
        run: |
          az logout
It is based on this article here.
I thought that it might be due to the Azure CLI version, but none of the versions I've tried have made a difference.
Any ideas why my site is broken when deploying with a GitHub action and the Azure CLI?
For anyone interested - I was missing environment variables during the build process in the GitHub action. I was able to pass these in, without checking the .env files into the repo, using GitHub secrets.
There's now a step in the action to create a .env:
- name: Set Environment Variables
  run: |
    touch .env
    echo ENVIRONMENT_VARIABLE=${{secrets.ENVIRONMENT_VARIABLE}} >> .env
and another to remove it:
- name: Remove Environment Variables
  run: |
    rm .env
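An alternative sketch: if your build tooling also reads process environment variables (not only the .env file), the secret could be passed straight to the build step through the step-level env key, avoiding the temporary file; ENVIRONMENT_VARIABLE is the placeholder name from above:

- name: npm build
  env:
    ENVIRONMENT_VARIABLE: ${{ secrets.ENVIRONMENT_VARIABLE }}   # placeholder secret name
  run: |
    npm run build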

How to publish docker images to docker hub from gitlab-ci

GitLab provides a .gitlab-ci.yml template for building and publishing images to its own registry (click "new file" in one of your projects, then select .gitlab-ci.yml and Docker). The file looks like this, and it works out of the box :)
# This file is a template, and might need editing before it works on your project.
# Official docker image.
image: docker:latest

services:
  - docker:dind

before_script:
  - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY

build-master:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE" .
    - docker push "$CI_REGISTRY_IMAGE"
  only:
    - master

build:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG"
  except:
    - master
But by default, this publishes to GitLab's registry. How can we publish to Docker Hub instead?
No need to change that .gitlab-ci.yml at all; we only need to add/replace the environment variables in the project's pipeline settings.
1. Find the desired registry URL
Using hub.docker.com won't work; you'll get the following error:
Error response from daemon: login attempt to https://hub.docker.com/v2/ failed with status: 404 Not Found
The default Docker Hub registry URL can be found like this:
docker info | grep Registry
Registry: https://index.docker.io/v1/
index.docker.io is what I was looking for.
2. Set the environment variables in GitLab settings
I wanted to publish gableroux/unity3d images using gitlab-ci; here's what I used under the project's Settings > CI/CD > Variables:
CI_REGISTRY_USER=gableroux
CI_REGISTRY_PASSWORD=********
CI_REGISTRY=docker.io
CI_REGISTRY_IMAGE=index.docker.io/gableroux/unity3d
CI_REGISTRY_IMAGE is important to set: it defaults to registry.gitlab.com/<username>/<project>, so the registry URL needs to be updated; use index.docker.io/<username>/<project>.
Since Docker Hub is the default registry when using docker, you can also use <username>/<project> instead. I personally prefer it verbose, so I kept the full registry URL.
This answer should also cover other registries, just update environment variables accordingly. 🙌
To expand on GabLeRoux's answer:
I had issues at the push stage of the GitLab CI build:
denied: requested access to the resource is denied
By changing my CI_REGISTRY to docker.io (removing the index.), I was able to push successfully.
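Putting the two answers together, a working set of pipeline variables for Docker Hub would presumably be (<username> and <project> are placeholders):

CI_REGISTRY=docker.io
CI_REGISTRY_USER=<username>
CI_REGISTRY_PASSWORD=<password>
CI_REGISTRY_IMAGE=index.docker.io/<username>/<project>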

How to deploy on custom docker registry?

I set up a simple Go project and I wish to build and deploy a simple Docker image to my private registry.
This is my .drone.yml:
pipeline:
  build:
    image: golang
    commands:
      - go build
  docker:
    image: plugins/docker
    username: xxxxxxxxxxx
    password: yyyyyyyyyyy
    repo: docker.mycompany.it:5000/drone/test
    tags: latest
    debug: true
But the plugin tries to connect and authenticate against the default Docker registry instead of my private one.
If you are using a custom registry, you need to set the registry parameter in the plugin configuration [1]. The registry parameter is passed to the docker login command (e.g. docker login gcr.io).
Example configuration with a custom registry:
pipeline:
  docker:
    image: plugins/docker
    repo: index.company.com/foo/bar
    registry: index.company.com
[1] source http://plugins.drone.io/drone-plugins/drone-docker/
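Applied to the pipeline from the question, that presumably means adding the registry host (including the port) taken from the repo value:

pipeline:
  docker:
    image: plugins/docker
    username: xxxxxxxxxxx
    password: yyyyyyyyyyy
    repo: docker.mycompany.it:5000/drone/test
    registry: docker.mycompany.it:5000   # the registry-host portion of repo
    tags: latest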