YAML files sometimes contain templated values in double curly braces, e.g. when used by Helm to configure Kubernetes:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
  namespace: {{ .Release.Namespace }}
The formatting as shown above is what I want, and what is standard in Helm charts: a space inside the inner pair of curly braces, but no space between the two curly braces on each side.
Is it possible to configure IntelliJ to respect this formatting style? As far as I can tell, the options (under File > Settings > Editor > Code Style > YAML > Spaces) are either:
Within code braces YES, which would produce { { .Release.Name } }
Within code braces NO, which would produce {{.Release.Name}}
If someone ends up here like me, this is solved by installing the following two plugins from JetBrains:
Kubernetes Plugin: https://plugins.jetbrains.com/plugin/10485-kubernetes
Go Template Plugin: https://plugins.jetbrains.com/plugin/10581-go-template
After installing them, auto-formatting respected the following style without further configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
  namespace: {{ .Release.Namespace }}
I installed both the "Kubernetes" and "Go Template" plugins but still had the issue. The solution that worked for me in IntelliJ was:
Disable the YAML bundled plugin;
Restart IntelliJ;
Re-enable the YAML bundled plugin.
...and the IDE formatter does not put a space between double curly braces anymore.
I deployed a Nuxt 3 website as a classic SPA since I don't need SSR for my project. I used nuxt generate and deployed the contents of .output/public/ to an Azure Static Web App. It is running successfully now, but when I access pages with dynamic routes like user/[id] and refresh the page, I get this message:
The resource you are looking for has been removed, had its name changed, or is temporarily unavailable.
nuxt.config.ts
export default defineNuxtConfig({
  ssr: false,
});
I'm really new to Nuxt and the Vue world, so I'd appreciate any help.
I followed the MS docs and was able to run the Nuxt app without any issues.
Make sure you have followed the same steps as below.
Navigate to GitHub and create a new repository from nuxt-3-starter.
A new repo with the starter code will be generated.
In the Azure Portal, create a new Static Web App.
Select GitHub as the deployment source and provide the repository and branch details.
We can also check the Workflow file.
My Workflow:
name: Azure Static Web Apps CI/CD

on:
  push:
    branches:
      - main
  pull_request:
    types: [opened, synchronize, reopened, closed]
    branches:
      - main

jobs:
  build_and_deploy_job:
    if: github.event_name == 'push' || (github.event_name == 'pull_request' && github.event.action != 'closed')
    runs-on: ubuntu-latest
    name: Build and Deploy Job
    steps:
      - uses: actions/checkout@v2
        with:
          submodules: true
      - name: Build And Deploy
        id: builddeploy
        uses: Azure/static-web-apps-deploy@v1
        with:
          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_MANGO_PLANT_0E617661E }}
          repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for GitHub integrations (i.e. PR comments)
          action: "upload"
          ######
          # For more information regarding Static Web App workflow configurations, please visit: https://aka.ms/swaworkflowconfig
          app_location: "/" # App source code path
          api_location: ".output/server"
          output_location: ".output/public" # Built app content directory
          ###### End of Repository/Build Configurations ######

  close_pull_request_job:
    if: github.event_name == 'pull_request' && github.event.action == 'closed'
    runs-on: ubuntu-latest
    name: Close Pull Request Job
    steps:
      - name: Close Pull Request
        id: closepullrequest
        uses: Azure/static-web-apps-deploy@v1
        with:
          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_MANGO_PLANT_0E617661E }}
          action: "close"
Make sure the build and deployment are successful. You can check this in the GitHub repository under Actions.
Click on the workflow run to see the job status.
Output:
With dynamic routes - user/[id]
Please check the code in my GitHub repository.
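If refreshing a dynamic route such as user/[id] still returns the "resource has been removed" message, the piece that is usually missing for a SPA is a navigation fallback, so the static host rewrites unknown paths to index.html instead of returning an error. Azure Static Web Apps reads this from a staticwebapp.config.json in the deployed content (for Nuxt, putting it in public/ gets it copied into .output/public). A minimal sketch; the excluded paths are assumptions and can be adjusted:
{
  "navigationFallback": {
    "rewrite": "/index.html",
    "exclude": ["/_nuxt/*", "/assets/*"]
  }
}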
We are trying to deploy a few deployment files with the argocd app create command. Our YAML file contains a parameter for specifying the image name dynamically. We are looking at passing the image value with the argocd CLI to replace this variable at runtime. Parameter overrides in Argo CD don't seem to work for non-Helm deployments.
Below is the deployment.yaml file, which holds the placeholder for the image:
spec:
  containers:
    - name: helloworld-gitlab
      image: '###IMAGE###'
kustomization.yaml is as below:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - gitlab_msd_deployment.yaml
  - gitlab_msd_service.yaml
We are passing the below values in the argocd command.
argocd app create helloworld-gitlab --repo https://example.git --path ./k8s-deployments/ --dest-server https://kubernetes.default.svc --dest-namespace ${NAMESPACE} --kustomize-image $IMAGE
But the pod ends up in the InvalidImageName state with the error below:
Failed to apply default image tag "###IMAGE###": couldn't parse image reference "###IMAGE###": invalid reference format: repository name must be lowercase
Any idea how we can get the placeholder replaced with the value sent in the argocd app create command?
Any other ideas?
I got this working as below (with help from the Slack channel). File definitions are here.
argocd app create mykustomize-guestbook --repo https://github.com/mayzhang2000/argocd-example-apps.git --path kustomize-guestbook --dest-namespace default --dest-server https://kubernetes.default.svc --kustomize-image myImage=gcr.io/heptio-images/ks-guestbook-demo:0.1
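The point is that --kustomize-image overrides an image by the name kustomize already sees in the manifests, so the Deployment has to reference a real placeholder image name such as myImage rather than a text token like ###IMAGE###. The flag is roughly equivalent to adding an images entry to the kustomization; a sketch adapted to the files above, where the placeholder name myImage and the target image are illustrative:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - gitlab_msd_deployment.yaml
  - gitlab_msd_service.yaml
images:
  - name: myImage   # must match the image: value in gitlab_msd_deployment.yaml
    newName: registry.example.com/helloworld-gitlab
    newTag: "1.0.0"
With the Deployment using image: myImage, passing --kustomize-image myImage=registry.example.com/helloworld-gitlab:1.0.0 on the CLI performs the same substitution at sync time.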
I tried to build a GitHub Actions .yml workflow file but I'm getting an error like:
GitHub Actions / Main Workflow
Invalid workflow file
The workflow is not valid. .github/workflows/build.yml (Line: 22, Col: 22): Unexpected symbol: '<hash_value>'. Located at position 9 within expression: secrets.<hash_value>
CODE
on:
  # Trigger analysis when pushing in master or pull requests, and when creating
  # a pull request.
  push:
    branches:
      - master
  pull_request:
    types: [opened, synchronize, reopened]
name: Main Workflow
jobs:
  sonarcloud:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          # Disabling shallow clone is recommended for improving relevancy of reporting
          fetch-depth: 0
      - name: SonarCloud Scan
        uses: SonarSource/sonarcloud-github-action@v1.3
        env:
          GITHUB_TOKEN: {{ secrets.<hash_value>}} SONAR_TOKEN: {{ secrets.<hash_value>}}
AND
#Configure here general information about the environment, such as SonarQube server connection details for example
#No information about specific project should appear here

#----- Default SonarQube server
sonar.host.url=https://sonarcloud.io/

#----- Default source code encoding
#sonar.sourceEncoding=UTF-8

sonar.organization=blah blah
sonar.projectKey=blah blah

# --- optional properties ---

# defaults to project key
sonar.projectName=Toolsdemo

# defaults to 'not provided'
sonar.projectVersion=1.0

# Path is relative to the sonar-project.properties file. Defaults to .
sonar.sources=https://github.com/abcd/xyz

# Encoding of the source code. Default is default system encoding
sonar.sourceEncoding=UTF-8
Please help. I need to get my code tested quickly.
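For what it's worth, the error at line 22 points at the env block: GitHub Actions expressions need the leading dollar sign, i.e. ${{ secrets.NAME }}, each variable goes on its own line, and NAME must be the name of a secret defined in the repository settings, not its value. A sketch of the expected shape (SONAR_TOKEN is the conventional secret name; adjust it to whatever the secret is actually called):
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}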
When config files change, the config server does not reload the changes.
Can it not monitor native file changes, or is something wrong with my setup?
spring:
  profiles:
    active: native
  cloud:
    config:
      server:
        native:
          search-locations: classpath:/conf/, classpath:/conf/licensingservice/
Since your configuration is packaged inside your jar, the server does not see changes: if you modify a configuration file that lives on the classpath, the running server keeps serving the packaged copy.
A better option is to keep this configuration somewhere outside the classpath.
You can use a configuration like the one below:
spring:
  application:
    name: configserver
  cloud:
    config:
      server:
        native:
          searchLocations: file://${LOCAL_REPO}
This way you can control the location through the environment variable LOCAL_REPO, for example by pointing it at a directory on the server's filesystem, as in the sketch below.
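A hypothetical launch, where the path and jar name are only examples:
LOCAL_REPO=/opt/config-repo java -jar configserver.jar
Spring resolves the ${LOCAL_REPO} placeholder from the environment, so searchLocations becomes file:///opt/config-repo.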
Of course, for production I suggest moving the configuration to a Git repository, which is a more suitable, production-ready choice.
I hope this helps.
I am running a large, world-spanning OpenShift cluster. When I run a build from a BuildConfig, it randomly assigns the build to any node in the entire cluster. This is problematic, as many regions have higher latency, which dramatically slows down build times and image uploads. I can't find any information in the documentation on using node selector tags at this level. I have tried adding openshift.io/node-selector: dc=mex01 to the annotations, as is done with project-level node selectors, to no avail. Any help would be great. Thanks!
Project node selectors are the only way to control where builds happen at the present time.
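If it helps, a project-level selector lives as an annotation on the project's namespace; a hypothetical example, where the namespace name and label are placeholders:
oc annotate namespace my-build-project openshift.io/node-selector='dc=mex01' --overwrite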
Since this is the Question that shows up first on Google:
This is now possible (since 1.3 apparently): https://docs.openshift.org/latest/dev_guide/builds/advanced_build_operations.html#dev-guide-assigning-builds-to-nodes
To elaborate a bit on mhutter's answer, here are example YAML fragments using node selectors:
a BuildConfig:
apiVersion: "v1"
kind: "BuildConfig"
metadata:
name: "sample-build"
spec:
nodeSelector:
canbuild: yes
and a Node:
apiVersion: v1
kind: Node
metadata:
  creationTimestamp: null
  labels:
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/os: linux
    kubernetes.io/hostname: mybestnodeever
    canbuild: "yes"
Since OCP v3.6 there are taints and tolerations, which can be applied to nodes and pods, but I haven't yet found any docs on applying tolerations to build configs (or on whether they propagate to the builder pods).
https://docs.openshift.com/container-
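For reference, this is the general shape of a toleration on a pod spec; the key and value here are placeholders, and whether something equivalent can be set on a BuildConfig is exactly the open question above:
tolerations:
  - key: "buildonly"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"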