How do I use a node selector with a build config in openshift? - openshift-origin

I am running a large, world-spanning openshift cluster. When I run a build from a BuildConfig, it will randomly assign the build to any node in the entire cluster. This is problematic, as many regions have higher latency, which dramatically slows down build times and image uploads. I can't find any information in the documentation on using node selector tags at this level. I have tried adding openshift.io/node-selector: dc=mex01 to the annotations, as is done with project-level node selectors, to no avail. Any help would be great. Thanks!

Project node selectors are the only way to control where builds happen at the present time.

Since this is the Question that shows up first on Google:
This is now possible (since 1.3 apparently): https://docs.openshift.org/latest/dev_guide/builds/advanced_build_operations.html#dev-guide-assigning-builds-to-nodes

To elaborate a bit on mhutter's answer, here are example yaml fragments using node selectors:
a BuildConfig:
apiVersion: "v1"
kind: "BuildConfig"
metadata:
  name: "sample-build"
spec:
  nodeSelector:
    canbuild: "yes"
and a Node:
apiVersion: v1
kind: Node
metadata:
  creationTimestamp: null
  labels:
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/os: linux
    kubernetes.io/hostname: mybestnodeever
    canbuild: "yes"
(Note that the value yes must be quoted: unquoted, YAML parses it as a boolean, and label values must be strings.)
Since OCPv3.6 there are the taints and tolerations, which can be applied to nodes and pods, but I haven't yet found any docs on applying tolerations to the build configs (or on whether they propagate to the builder pods).
https://docs.openshift.com/container-
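For reference, this is what a plain-Kubernetes taint/toleration pairing looks like. This is a generic pod spec sketch, not a confirmed BuildConfig feature, and the `buildonly` key/value is a made-up example:

```yaml
# Taint a node so only pods tolerating "buildonly" schedule there
# (hypothetical key/value):
#   oc adm taint nodes mybestnodeever buildonly=yes:NoSchedule
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  tolerations:
  - key: "buildonly"
    operator: "Equal"
    value: "yes"
    effect: "NoSchedule"
  containers:
  - name: main
    image: busybox
```

Whether the builder pods spawned from a BuildConfig will carry such a toleration is exactly the open question above.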


How can I use Akka.NET lighthouse with hyperion

Currently I'm using the akka.net lighthouse docker image which is on dockerhub. Together with Akka.Bootstrap.Docker it's nice to override akka hocon configuration from the environment variables. I've set the following environment variables in my k8s deployment file
- name: AKKA__ACTOR__SERIALIZERS__HYPERION
  value: "\"Akka.Serialization.HyperionSerializer, Akka.Serialization.Hyperion\""
- name: AKKA__ACTOR__SERIALIZATION-BINDINGS__System__Object
  value: hyperion
But if I want to enable hyperion serialization it fails with the following message:
The type name for serializer 'hyperion' did not resolve to an actual Type: 'Akka.Serialization.HyperionSerializer, Akka.Serialization.Hyperion'
The documentation of Akka.NET Lighthouse is very scarce so does anyone of you know how I can use hyperion serialization with Akka.NET lighthouse?
Akka.NET is trying to load the Hyperion serializer via a Type.GetType("Akka.Serialization.HyperionSerializer, Akka.Serialization.Hyperion") call, and fails to do so because the Lighthouse docker image does not include the Akka.Serialization.Hyperion package.
So what you need to do is:
Clone Lighthouse repo and add Akka.Serialization.Hyperion package to Lighthouse project references
Build your own docker image and use it instead.
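A minimal sketch of those two steps. The repo URL is the real Lighthouse repo; the project path and image tag are assumptions you'd adjust for your setup:

```shell
# Clone the Lighthouse source
git clone https://github.com/petabridge/lighthouse.git
cd lighthouse

# Add the Hyperion serializer package to the Lighthouse project
# (the exact .csproj path may differ between Lighthouse versions)
dotnet add src/Lighthouse/Lighthouse.csproj package Akka.Serialization.Hyperion

# Build and push your own image, then reference it in your k8s deployment
docker build -t myregistry/lighthouse-hyperion:latest .
docker push myregistry/lighthouse-hyperion:latest
```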

ArgoCD with kustomize to replace images during runtime

We are trying to deploy a few deployment files with the argocd app create command. Our yaml file contains a parameter for specifying the image name dynamically. We are looking at passing the image value with the argocd cli to replace this variable at runtime. Parameter overrides in argocd don't seem to work for non-helm deployments.
Below is the deployment.yaml file which hold the placeholders for the image:
spec:
  containers:
  - name: helloworld-gitlab
    image: '###IMAGE###'
kustomization.yaml is as below:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- gitlab_msd_deployment.yaml
- gitlab_msd_service.yaml
We are passing the below values in the argocd command.
argocd app create helloworld-gitlab --repo https://example.git --path ./k8s-deployments/ --dest-server https://kubernetes.default.svc --dest-namespace ${NAMESPACE} --kustomize-image $IMAGE
But the pod has InvalidImageName state with the error below:
Failed to apply default image tag "###IMAGE###": couldn't parse image reference "###IMAGE###": invalid reference format: repository name must be lowercase
Any idea how we can get the placeholder replaced with the value sent in the argocd app create command?
Any other ideas?
I could get this working as below (with help from the slack channel). File definitions are here.
argocd app create mykustomize-guestbook --repo https://github.com/mayzhang2000/argocd-example-apps.git --path kustomize-guestbook --dest-namespace default --dest-server https://kubernetes.default.svc --kustomize-image myImage=gcr.io/heptio-images/ks-guestbook-demo:0.1
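The important detail is that kustomize's images transformer (which --kustomize-image drives) replaces image names, not arbitrary text placeholders, so "###IMAGE###" can never match. Instead, the deployment should reference a well-formed stand-in name that the CLI flag can override; a sketch using the hypothetical stand-in name myImage, matching the working command above:

```yaml
# deployment fragment: use a valid image name as the stand-in
spec:
  containers:
  - name: helloworld-gitlab
    image: myImage
---
# kustomization.yaml: at sync time,
#   argocd app create ... --kustomize-image myImage=<real image>
# rewrites every reference to "myImage" in the listed resources
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- gitlab_msd_deployment.yaml
- gitlab_msd_service.yaml
```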

DBT problem with yml file: Profile my-bigquery-db in profiles.yml is empty

I am doing the DBT hello world tutorial found here, and I have created my first project on a windows machine. My profiles.yml file looks like this:
my-bigquery-db:
target: dev
outputs:
dev:
type: bigquery
method: service-account
project: abiding-operand-286102
dataset: xxxDBTtestO1
threads: 1
keyfile: C:\Users\xxx\.dbt\abiding-operand-286102-ed9e3c9c13cd.json
timeout_seconds: 300
when I execute dbt run I get:
Running with dbt=0.17.2 Encountered an error while reading profiles: ERROR Runtime Error
dbt encountered an error while trying to read your profiles.yml
file.
Profile my-bigquery-db in profiles.yml is empty
Defined profiles:
my-bigquery-db
target
outputs
dev
type
method
project
dataset
threads
keyfile
timeout_seconds
Any idea?
At first glance, from both your code and the source walkthrough, this is just a YAML config problem. YAML is whitespace-sensitive: nesting is expressed through indentation. Your file has every key at the top level, which is exactly why dbt reports my-bigquery-db as empty and lists target, outputs, dev, etc. as separate "profiles".
I'm not sure if you can simply copy from the below but it might be worth a shot.
my-bigquery-db:
  target: dev
  outputs:
    dev:
      type: bigquery
      method: service-account
      project: abiding-operand-286102
      dataset: xxxDBTtestO1
      threads: 1
      keyfile: C:\Users\xxx\.dbt\abiding-operand-286102-ed9e3c9c13cd.json
      timeout_seconds: 300
Basically, your dbt profiles.yml needs to be set up with the sections at certain indentation levels (not unlike Python indentation or any other whitespace scheme).
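If you want to check the file without a full run, dbt ships a debug command that validates profiles.yml and tests the connection (the path shown follows the default Windows location from your question):

```shell
# Parses profiles.yml, reports structural problems, and tests the
# BigQuery connection defined by the active target
dbt debug --profiles-dir C:\Users\xxx\.dbt
```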

Are there any buildpacks that generate Spring Webflux-optimized OCI images?

I'm unable to find a buildpack builder that lends itself to Spring Webflux applications. Paketo, for example, has little room for customization and out of the box includes buildpacks that are not needed for a typical Webflux app (e.g. apache-tomcat). Are there any buildpacks tailored for webflux/jvm-reactive applications?
It doesn't look like you need to do anything specific here for Spring Webflux; the Java-related CNBs will just do the right thing.
I took a sample app that I created with Spring Initializer (just added the Webflux starter) and ran pack build against it (if you run ./mvnw spring-boot:build-image, you should get very similar output).
That gave me the following output:
===> DETECTING
[detector] 6 of 17 buildpacks participating
[detector] paketo-buildpacks/bellsoft-liberica 3.2.0
[detector] paketo-buildpacks/maven 3.1.0
[detector] paketo-buildpacks/executable-jar 3.1.0
[detector] paketo-buildpacks/apache-tomcat 2.2.0
[detector] paketo-buildpacks/dist-zip 2.2.0
[detector] paketo-buildpacks/spring-boot 3.2.0
At first glance, that might seem odd. Why is Tomcat there? Looking at the Tomcat CNB, that is expected though. The Tomcat CNB is always going to return a successful detection.
Note how Pass is hard-coded to true.
result := libcnb.DetectResult{
    Pass: true,
    Plans: []libcnb.BuildPlan{
        {
            Requires: []libcnb.BuildPlanRequire{
                {Name: "jre", Metadata: map[string]interface{}{"launch": true}},
                {Name: "jvm-application"},
            },
        },
    },
}
The reason this is OK is because at build time, the Tomcat CNB will immediately exit (no-op) if there is no WEB-INF directory, and for a Spring WebFlux app there won't be.
file := filepath.Join(context.Application.Path, "WEB-INF")
if _, err := os.Stat(file); err != nil && !os.IsNotExist(err) {
    return libcnb.BuildResult{}, fmt.Errorf("unable to stat file %s\n%w", file, err)
} else if os.IsNotExist(err) {
    return libcnb.BuildResult{}, nil
}
You can confirm this by looking at the full output of pack build and looking for the presence of Paketo Apache Tomcat Buildpack x.x.x (where x.x.x is the version number). If the Tomcat CNB were running and performing any work, you'd see that line output.
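For example (assuming your app directory and a configured default builder; the grep just filters the build log):

```shell
# If this prints nothing, the Tomcat CNB no-op'd and contributed
# nothing to the image
pack build myapp 2>&1 | grep "Apache Tomcat"
```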
The paketo-buildpacks/dist-zip CNB works the same way; it's just looking for <APPLICATION_ROOT>/*/bin/* to exist.
So in summary, the image that's generated should be specific to your application and shouldn't contain unnecessary stuff. If you're running Spring WebFlux, then you shouldn't have Tomcat installed into your image. In addition, you get all the optimizations provided by using Cloud Native buildpacks.

Drone.io auto_tag with branch name

Using the drone docker plugin to create my cloud images, I would like to simplify the workflow by having drone automatically tag my images based on the git branch I'm working on.
I saw an auto_tag option, but unfortunately it always tags my images as "latest".
###
# Tag deployment
# Docker image
###
push-tag-news:
  image: plugins/docker
  registry: docker.domain.com:5000
  secrets: [docker_username, docker_password]
  repo: docker.domain.com:5000/devs/news
  auto_tag: true # Or how to specify the current branch for the tags: option?
  when:
    exclude: [master, dev]
has anyone tried to do something similar?
I'm using drone 0.8
auto_tag uses the repository's git tags; it seems to me you are looking to set custom docker image tags instead.
You can use any of these variables http://docs.drone.io/environment-reference/
Try using DRONE_COMMIT_BRANCH
build-docker-image:
  image: plugins/docker
  repo: myname/myrepo
  secrets: [ docker_username, docker_password ]
  tags:
    - ${DRONE_COMMIT_BRANCH}
    - latest
- latest