I am experimenting with Skaffold and IntelliJ to develop directly in Kubernetes, but I am having trouble with Maven. When IntelliJ tries to initialize the environment, the following error occurs:
Running "bash -c curl --fail --show-error --silent --location --retry 3
https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.10%2B9/OpenJDK11U-
jdk_x64_linux_hotspot_11.0.10_9.tar.gz | tar xz --directory /layers/google.java.runtime/java --
strip-components=1"
[builder] Done "bash -c curl --fail --show-error --silent --location --retry..." (59.3720683s)
[builder] === Java - Maven (google.java.maven#0.9.0) ===
[builder] Installing Maven v3.6.3
[builder] Running "/layers/google.java.maven/maven/bin/mvn clean package --batch-mode -DskipTests --
quiet"
[builder] [ERROR] [ERROR] Some problems were encountered while processing the POMs:
The problem is that some of my Spring Boot application dependencies live in our Nexus repository, which is configured as a mirror in my Maven settings.xml. This build process does not know about that mirror configuration, and I can't find a way to configure it for Skaffold.
I tried setting settings.xml in skaffold.yaml as follows:
apiVersion: skaffold/v2beta11
kind: Config
build:
  artifacts:
    - image: myproject/myapp
  jib:
    args:
      - --settings=C:\maven\conf\settings.xml
  tagPolicy:
    sha256: {}
Does anybody have an idea how to get 'google.java.maven' to use my mirror configuration?
Thx for answers...
Skaffold supports three builders that work out of the box for Java apps: Jib, Buildpacks, and Docker. The Jib builder will be the easiest for your needs.
Jib builds run on your host machine (vs. within a containerized environment). Because Skaffold's Jib builder just invokes Maven or Gradle directly, these builds use your account settings with no additional configuration required (specifically your $HOME/.m2/settings.xml and your artifact cache in $HOME/.m2/repository). Your skaffold.yaml above just needs a small indentation tweak and it should all work:
apiVersion: skaffold/v2beta11
kind: Config
build:
  artifacts:
    - image: myproject/myapp
      jib: {}
  tagPolicy:
    sha256: {}
You can see a working example in the Skaffold examples.
Docker and Buildpacks builds are run within a container: that is, the source is copied into the container. As a result, you can't reference files outside of the build context, like your $HOME/.m2/settings.xml. You could create a model settings.xml within your source directory and reference that file, and then use environment variables or build-arguments to pass in usernames and passwords. But it becomes quite involved.
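For illustration, a model settings.xml along those lines might define the mirror and pull the credentials from environment variables; the server id, Nexus URL, and variable names below are placeholders, not values from the question:
<settings>
  <mirrors>
    <mirror>
      <id>nexus</id>
      <mirrorOf>*</mirrorOf>
      <url>https://nexus.example.com/repository/maven-public/</url>
    </mirror>
  </mirrors>
  <servers>
    <server>
      <id>nexus</id>
      <!-- resolved by Maven from environment variables injected into the build container -->
      <username>${env.NEXUS_USERNAME}</username>
      <password>${env.NEXUS_PASSWORD}</password>
    </server>
  </servers>
</settings>
The containerized Maven invocation would then have to be pointed at this file (for example with --settings), with NEXUS_USERNAME and NEXUS_PASSWORD passed in as environment variables or build arguments.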
We have an open issue to allow mounting directories as volumes for the Buildpacks builder, and we should be able to do the same for the Docker builder. That functionality would make it easier to support your situation if you really wanted to use Buildpacks or Docker.
Related
I've configured IntelliJ to use Python via a stack I've defined in docker-compose. I've configured my project to execute my pytest tests via docker-compose so that I can use the debugger. However, I've discovered that after the initial run, when I change my code and re-run my tests, pytest is not seeing my changes, but rather executing a cached version of the code.
The only way I've discovered to get around this is to invoke the File menu option Invalidate Caches and Restart. This is annoying.
This is my compose file:
networks:
  app: {}
services:
  item-set-definitions:
    build:
      context: /Users/kudrykma/ghoildex/kudrykma/analytics/sa-item-set-definitions
      target: build
    command:
      - /bin/bash
    image: item-sets:test
    networks:
      app: {}
    volumes:
      - source: /Users/kudrykma/ghoildex/kudrykma/analytics/sa-item-set-definitions
        target: /project
        type: bind
version: '3.9'
In the pytest run configuration I've tried adding the -force-recreate option in the docker-compose Command and options field, but IntelliJ won't recognize it.
Does anyone know how I can configure IntelliJ to not cache any of my source files so that pytest will see my changed code?
Thank you
When I run mvn javadoc:javadoc locally, it gives me a bunch of warnings (empty @return tags, unknown tags, etc.) but eventually builds the Javadoc tree.
On GitLab CI, I have the following in .gitlab-ci.yml:
variables:
  MAVEN_OPTS: "-Dhttps.protocols=TLSv1.2 -Dmaven.repo.local=$CI_PROJECT_DIR/.m2/repository -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=WARN -Dorg.slf4j.simpleLogger.showDateTime=true -Djava.awt.headless=true"

# [...]

deploy:jdk8:
  stage: deploy
  script:
    - if [ ! -f ci_settings.xml ];
        then echo "CI settings missing\! If deploying to GitLab Maven Repository, please see https://docs.gitlab.com/ee/user/project/packages/maven_repository.html#creating-maven-packages-with-gitlab-cicd for instructions.";
      fi
    - 'mvn $MAVEN_CLI_OPTS deploy -s ci_settings.xml'
    - 'mvn javadoc:javadoc'
    - 'cp -r target/site/apidocs public/javadoc/dev'
  artifacts:
    paths:
      - public
  only:
    - master
    - dev
Here, Javadoc generation fails, with the previously mentioned warnings being reported as errors.
I am ultimately going to fix these things, but in the meantime, I would like Javadoc on CI to behave like its local counterpart. Where is the setting to accomplish that?
Changing the mvn javadoc:javadoc line as follows did the trick for me:
- 'mvn javadoc:javadoc -DadditionalJOption=-Xdoclint:none'
Still not sure why this seems to be the default behavior on my local Maven installation but not on GitLab, but at least this solves my issue for now.
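If you would rather not carry the flag on the CI command line, the same effect can, as far as I know, also be configured in the POM through the maven-javadoc-plugin; a sketch assuming plugin version 3.x:
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-javadoc-plugin</artifactId>
      <version>3.2.0</version>
      <configuration>
        <!-- do not fail Javadoc generation on doclint findings -->
        <doclint>none</doclint>
      </configuration>
    </plugin>
  </plugins>
</build>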
Gitlab provides a .gitlab-ci.yml template for building and publishing images to its own registry (click "new file" in one of your projects, select .gitlab-ci.yml and docker). The file looks like this and it works out of the box :)
# This file is a template, and might need editing before it works on your project.
# Official docker image.
image: docker:latest

services:
  - docker:dind

before_script:
  - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY

build-master:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE" .
    - docker push "$CI_REGISTRY_IMAGE"
  only:
    - master

build:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG"
  except:
    - master
But by default, this will publish to gitlab's registry. How can we publish to docker hub instead?
No need to change that .gitlab-ci.yml at all; we only need to add or replace the environment variables in the project's pipeline settings.
1. Find the desired registry url
Using hub.docker.com won't work; you'll get the following error:
Error response from daemon: login attempt to https://hub.docker.com/v2/ failed with status: 404 Not Found
Default docker hub registry url can be found like this:
docker info | grep Registry
Registry: https://index.docker.io/v1/
index.docker.io is what I was looking for.
2. Set the environment variables in gitlab settings
I wanted to publish gableroux/unity3d images using gitlab-ci, here's what I used in Gitlab's project > Settings > CI/CD > Variables
CI_REGISTRY_USER=gableroux
CI_REGISTRY_PASSWORD=********
CI_REGISTRY=docker.io
CI_REGISTRY_IMAGE=index.docker.io/gableroux/unity3d
CI_REGISTRY_IMAGE is important to set.
It defaults to registry.gitlab.com/<username>/<project>, so the registry url needs to be updated: use index.docker.io/<username>/<project>.
Since docker hub is the default registry when using docker, you can also use <username>/<project> instead. I personally prefer when it's verbose, so I kept the full registry url.
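With those variables in place, the template's login, build, and push steps roughly expand to the following (using my example values above; the password comes from the masked variable):
docker login -u "gableroux" -p "$CI_REGISTRY_PASSWORD" docker.io
docker build --pull -t "index.docker.io/gableroux/unity3d" .
docker push "index.docker.io/gableroux/unity3d"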
This answer should also cover other registries, just update environment variables accordingly. 🙌
To expand on GabLeRoux's answer:
I had issues on the pushing stage of the GitLab CI build:
denied: requested access to the resource is denied
By changing my CI_REGISTRY to docker.io (remove the index.) I was able to successfully push.
I have set up drone.io locally and created a .drone.yml for the CI build. But I found that Drone removes the Docker container after finishing the build. Does it support reusing the Docker container? I am working on a Gradle project and the initial build takes a long time to download the Java dependencies.
UPDATE1
I used the command below to set the admin user when running the drone-server container.
docker run -d \
-e DRONE_GITHUB=true \
-e DRONE_GITHUB_CLIENT="xxxx" \
-e DRONE_GITHUB_SECRET="xxxx" \
-e DRONE_SECRET="xxxx" \
-e DRONE_OPEN=true \
-e DRONE_DATABASE_DRIVER=mysql \
-e DRONE_DATABASE_DATASOURCE="root:root@tcp(mysql:3306)/drone?parseTime=true" \
-e DRONE_ADMIN="joeyzhao0113" \
--restart=always \
--name=drone-server \
--link=mysql \
drone/drone:0.5
After doing this, I log in to the Drone server as the user joeyzhao0113 but fail to enable the Trusted flag on the settings page. The popup dialog reports that the setting was saved successfully, but the flag always remains disabled.
No, it is not possible to re-use a Docker container for your Drone build. Build containers are ephemeral and are destroyed at the end of every build.
That being said, it doesn't mean your problem cannot be solved.
I think a better way to phrase this question would be "how do I prevent my builds from having to re-download dependencies"? There are two solutions to this problem.
Option 1, Cache Plugin
The first, recommended solution, is to use a plugin to cache and restore your dependencies. Cache plugins such as the volume cache and s3 cache are community contributed plugins.
pipeline:
  # restores the cache from a local volume
  restore-cache:
    image: drillster/drone-volume-cache
    restore: true
    mount: [ /drone/.gradle, /drone/.m2 ]
    volumes:
      - /tmp/cache:/cache

  build:
    image: maven
    environment:
      - M2_HOME=/drone/.m2
      - MAVEN_HOME=/drone/.m2
      - GRADLE_USER_HOME=/drone/.gradle
    commands:
      - mvn install
      - mvn package

  # rebuild the cache in case new dependencies were
  # downloaded during your build
  rebuild-cache:
    image: drillster/drone-volume-cache
    rebuild: true
    mount: [ /drone/.gradle, /drone/.m2 ]
    volumes:
      - /tmp/cache:/cache
Option 2, Custom Image
The second solution is to create a Docker image with your dependencies, publish to DockerHub, and use this as your build image in your .drone.yml file.
pipeline:
  build:
    image: some-image-with-all-my-dependencies
    commands:
      - mvn package
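As a rough illustration only (the Maven base image tag and the dependency:go-offline approach are my own assumptions, not something prescribed by Drone), such a dependency-preloaded image could be built from a Dockerfile along these lines and pushed to DockerHub under whatever name you then reference in .drone.yml:
# Pre-bake Maven plus the project's dependencies into a reusable build image
FROM maven:3.6-jdk-8
WORKDIR /usr/src/app

# Copy only the POM first so the dependency download is cached as an image layer
COPY pom.xml .
RUN mvn --batch-mode dependency:go-offline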
I set up a drone.io instance locally and use it as our CI environment. I need to set the Docker container memory for running my test cases. Below is my .drone.yml file.
pipeline:
  build:
    image: centor
    commands:
      - mvn clean install
Is there a way for me to set the maximum memory in the docker container?
The .drone.yml file is a superset of the docker-compose file and supports the mem_limit field [1]:
pipeline:
  build:
    image: golang
    commands:
      - go build
      - go test
    mem_limit: 1000000000
Note that this field is only available in Drone version 0.5 and higher. So unfortunately it is not backported to older versions of Drone.
[1] https://docs.docker.com/compose/compose-file/#/cpushares-cpuquota-cpuset-domainname-hostname-ipc-macaddress-memlimit-memswaplimit-oomscoreadj-privileged-readonly-restart-shmsize-stdinopen-tty-user-workingdir