Communication between 2 pods in Kubernetes - automation

I created two pods with the following JSON manifest:
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "nginx-123",
    "labels": {
      "name": "nginx-123"
    }
  },
  "spec": {
    "containers": [
      {
        "name": "nginx-123",
        "image": "nginx",
        "ports": [
          {
            "containerPort": 80
          }
        ]
      }
    ]
  }
}
I would like to transfer data between the two pods in order to see the UI changes.
Any ideas?

You can use kubectl cp to copy files and directories; note that it copies between a pod and your local machine, so a pod-to-pod transfer is a two-step copy.
However, in your case you probably want to create two deployments and copy files from pods of one deployment to pods of the other. Otherwise a service could end up pointing to the same pods.
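For example, a minimal sketch, assuming the second pod is named nginx-456 and both pods serve from the default nginx document root:
kubectl cp nginx-123:/usr/share/nginx/html/index.html ./index.html   # pod -> local machine
kubectl cp ./index.html nginx-456:/usr/share/nginx/html/index.html   # local machine -> pod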

semantic-release in a GitLab pipeline with multiple users

I'm running a semantic-release job in a GitLab pipeline, it works great but only for my user (I configured it). No one else seems able to trigger a release, even if I merge their code. No errors, everything seems to run smoothly. I'm assuming there's some kind of authentication issue and/or everyone needs their own token or something like that? (I've only configured a token via my account and I'm not sure how I'd instruct someone to do that for multiple accounts in GitLab.)
The pipeline looks like this:
variables:
  GL_TOKEN: $GL_TOKEN

stages:
  - release

publish:
  image: node:lts-alpine
  stage: release
  before_script:
    - apk update
    - apk add zip unzip git
    - npm ci
  script:
    - npm run build
    - npx semantic-release
  only:
    refs:
      - main
and the config (in the package.json) is:
"release": {
"branches": [
"main"
],
"plugins": [
"#semantic-release/commit-analyzer",
"#semantic-release/release-notes-generator",
[
"#google/semantic-release-replace-plugin",
{
"replacements": [
{
"files": [
"style.css"
],
"from": "Version: .*",
"to": "Version: ${nextRelease.version}",
"results": [
{
"file": "style.css",
"hasChanged": true,
"numMatches": 1,
"numReplacements": 1
}
],
"countMatches": true
},
{
"files": [
"package.json"
],
"from": "\"version\": \".*\",",
"to": "\"version\": \"${nextRelease.version}\",",
"results": [
{
"file": "package.json",
"hasChanged": true,
"numMatches": 1,
"numReplacements": 1
}
],
"countMatches": true
}
]
}
],
[
"#semantic-release/git",
{
"assets": [
"style.css",
"package.json"
],
"message": "chore(release): ${nextRelease.version} [skip ci]"
}
],
[
"#semantic-release/exec",
{
"prepareCmd": "node bin/makezip.js"
}
],
[
"#semantic-release/gitlab",
{
"assets": [
{
"path": "file.zip",
"label": "compiled release"
}
]
}
]
]
}
I would suggest creating a token per project and using that instead of your personal token.
You can create a project access token under the project's Settings > Access Tokens.
This should solve your issue.
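For example, a sketch assuming you store the project access token in a CI/CD variable named PROJECT_TOKEN:
variables:
  GL_TOKEN: $PROJECT_TOKEN  # project access token, not tied to any one user's account
Because the token belongs to the project rather than to your user account, a release triggered by anyone's merge authenticates the same way.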

dotnet-monitor and OpenTelemetry?

I'm learning OpenTelemetry and I wonder how dotnet-monitor is connected with OpenTelemetry (Meter). Are those things somehow connected, or is dotnet-monitor just a custom MS tool that does not use the OpenTelemetry standards (API, SDK and exporters)?
If you run dotnet-monitor on your machine, it exposes the dotnet metrics in Prometheus format, which means you can configure the OpenTelemetry Collector to scrape those metrics.
For example, in the opentelemetry-collector-contrib configuration:
receivers:
  prometheus_exec:
    exec: dotnet monitor collect
    port: 52325
Please note that for dotnet-monitor to run you need to create a settings.json in this path:
$XDG_CONFIG_HOME/dotnet-monitor/settings.json
If $XDG_CONFIG_HOME is not defined, create the file in this path:
$HOME/.config/dotnet-monitor/settings.json
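For example, assuming $XDG_CONFIG_HOME is not set:
mkdir -p "$HOME/.config/dotnet-monitor"
# then place one of the settings.json examples below in that directory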
If you want to identify the process by its PID, write this into settings.json (change Value to your PID):
{
  "DefaultProcess": {
    "Filters": [{
      "Key": "ProcessId",
      "Value": "1"
    }]
  }
}
If you want to identify the process by its name, write this into settings.json (change Value to your process name):
{
  "DefaultProcess": {
    "Filters": [{
      "Key": "ProcessName",
      "Value": "iisexpress"
    }]
  }
}
In my example I used this configuration:
{
  "DefaultProcess": {
    "Filters": [{
      "Key": "ProcessId",
      "Value": "1"
    }]
  },
  "Metrics": {
    "Providers": [
      {
        "ProviderName": "System.Net.Http"
      },
      {
        "ProviderName": "Microsoft-AspNetCore-Server-Kestrel"
      }
    ]
  }
}
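To sanity-check the setup, you can start dotnet-monitor yourself and query the metrics endpoint directly (a sketch; 52325 is the port used in the collector config above):
dotnet monitor collect
curl http://localhost:52325/metrics   # should print the counters in Prometheus text format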

AWS Code Build - cached DOWNLOAD_SOURCE taking too long

I am trying to improve the build speed of one of my projects on CodeBuild. The project uses a GitHub source provider and source caching of type local is enabled.
The first time I ran the build it took 103 secs. I ran it again immediately after the first one finished, expecting it to complete in a few seconds thanks to source caching, but it took 60 secs.
What am I missing here? Is the cache not working? If it is working, why does it take that long on the second run?
Thanks
Project Details:
{
  "projectsNotFound": [],
  "projects": [
    {
      "environment": {
        "computeType": "BUILD_GENERAL1_LARGE",
        "imagePullCredentialsType": "SERVICE_ROLE",
        "privilegedMode": true,
        "image": "111669150171.dkr.ecr.us-east-1.amazonaws.com/***********/ep-build-env:latest",
        "environmentVariables": [
          {
            "type": "PLAINTEXT",
            "name": "NEXUS_URI",
            "value": "http://***************"
          },
          {
            "type": "PLAINTEXT",
            "name": "REGISTRY",
            "value": "111669150171.dkr.ecr.us-east-1.amazonaws.com/*********"
          }
        ],
        "type": "LINUX_CONTAINER"
      },
      "timeoutInMinutes": 60,
      "name": "StorefrontApi",
      "serviceRole": "arn:aws:iam::111669150171:role/CodeBuild-ECRReadOnly",
      "tags": [],
      "artifacts": {
        "type": "NO_ARTIFACTS"
      },
      "lastModified": 1571227097.581,
      "cache": {
        "type": "LOCAL",
        "modes": [
          "LOCAL_DOCKER_LAYER_CACHE",
          "LOCAL_SOURCE_CACHE",
          "LOCAL_CUSTOM_CACHE"
        ]
      },
      "vpcConfig": {
        "subnets": [
          "subnet-fd7f958b"
        ],
        "vpcId": "vpc-71e3f414",
        "securityGroupIds": [
          "sg-19b65e6c",
          "sg-9e28e9f9"
        ]
      },
      "created": 1571082681.262,
      "sourceVersion": "refs/heads/ep-mysql",
      "source": {
        "buildspec": "version: 0.2\n\nphases:\n build:\n commands:\n - env\n - cd extensions\n - mvn --settings $CODEBUILD_SRC_DIR_DEVOPS_WINE/pipelines/storefront/build-war/settings.xml --projects storefront/ext-storefront-webapp -am -DskipAllTests clean install\n\nartifacts:\n secondary-artifacts:\n storefront-war:\n base-directory: $CODEBUILD_SRC_DIR/extensions/storefront/ext-storefront-webapp/target\n files:\n - \"*.war\"\n\ncache:\n paths:\n - '/root/.m2/**/*'\n - '/root/.npm/**/*'",
        "insecureSsl": false,
        "gitSubmodulesConfig": {
          "fetchSubmodules": false
        },
        "location": "https://github.com/*****************.git",
        "gitCloneDepth": 1,
        "type": "GITHUB",
        "reportBuildStatus": false
      },
      "badge": {
        "badgeEnabled": false
      },
      "queuedTimeoutInMinutes": 480,
      "secondaryArtifacts": [],
      "logsConfig": {
        "s3Logs": {
          "status": "DISABLED",
          "encryptionDisabled": false
        },
        "cloudWatchLogs": {
          "status": "ENABLED"
        }
      },
      "secondarySources": [
        {
          "insecureSsl": false,
          "gitSubmodulesConfig": {
            "fetchSubmodules": false
          },
          "location": "https://github.com/*****************.git",
          "sourceIdentifier": "DEVOPS_WINE",
          "gitCloneDepth": 1,
          "type": "GITHUB",
          "reportBuildStatus": false
        }
      ],
      "encryptionKey": "arn:aws:kms:us-east-1:111669150171:alias/aws/s3",
      "arn": "arn:aws:codebuild:us-east-1:111669150171:project/StorefrontApi",
      "secondarySourceVersions": [
        {
          "sourceVersion": "refs/heads/staging",
          "sourceIdentifier": "DEVOPS_WINE"
        }
      ]
    }
  ]
}
Apparently, at the time of writing, CodeBuild does not use the native git client to fetch the source from GitHub. I understand the CodeBuild team has an internal feature request to move from whatever they're using to the native git client to improve performance.
Does your repository, by chance, have lots of large files in its history? You can use this answer for a command to run to analyze your repository; one version is sketched below.
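A common approach (a sketch, run from inside the repository) is to list the largest blobs anywhere in history:
# print the 20 largest blobs in the entire history: size in bytes, then path
git rev-list --objects --all |
  git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' |
  awk '/^blob/ {print $3, $4}' |
  sort -rn |
  head -20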
If you have lots of large files in your history and you're able to remove them, you can then use a tool like BFG Repo Cleaner to rewrite history (see the sketch below). That should speed up the DOWNLOAD_SOURCE phase.
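A BFG sketch, where the 10M threshold and repository name are only illustrations:
java -jar bfg.jar --strip-blobs-bigger-than 10M my-repo.git   # rewrite history, dropping big blobs
cd my-repo.git
git reflog expire --expire=now --all && git gc --prune=now --aggressive   # discard the old objects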
Also, if you have a dedicated support plan with AWS, you should reach out to your TAM to upvote the feature request to move to native git for GitHub source downloads.

Create env fails when using a daemonset to create processes in Kubernetes

I want to deploy software onto nodes with a DaemonSet, but it is not a Docker app. I created a DaemonSet JSON like this:
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "uniagent"
},
"annotations": {
"scheduler.alpha.kubernetes.io/tolerations": "[{\"key\":\"beta.k8s.io/accepted-app\",\"operator\":\"Exists\", \"effect\":\"NoSchedule\"}]"
},
"enable": true
},
"spec": {
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {},
"processes": [
{
"name": "foundation",
"package": "xxxxx",
"resources": {
"limits": {
"cpu": "100m",
"memory": "1Gi"
}
},
"lifecyclePlan": {
"kind": "ProcessLifecycle",
"namespace": "engb",
"name": "app-plc"
},
"env": [
{
"name": "SECRET_USERNAME",
"valueFrom": {
"secretKeyRef": {
"name": "key-secret",
"key": "uniagentuser"
}
}
},
{
"name": "SECRET_PASSWORD",
"valueFrom": {
"secretKeyRef": {
"name": "key-secret",
"key": "uniagenthash"
}
}
}
]
},
When the app deploys successfully, the env variables do not exist at all.
What should I do to solve this problem?
Thanks
DaemonSets have to run Docker (or other container) images; you can't run non-containerized programs as a DaemonSet. https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/ Kubernetes only launches containers.
Also, in your manifest file I see a "processes" key, and I have reason to believe it's not a valid manifest, so I doubt you deployed it successfully.
You have not pasted the full manifest, but I'm guessing the "template" key at the beginning is the spec.template key of the file.
Run kubectl explain daemonset.spec.template.spec and you'll see that there is no "processes" field; the valid field is "containers".
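For comparison, a minimal sketch of a valid DaemonSet manifest; the image name is a placeholder, since your agent would first have to be packaged as a container image:
{
  "apiVersion": "apps/v1",
  "kind": "DaemonSet",
  "metadata": { "name": "uniagent" },
  "spec": {
    "selector": { "matchLabels": { "app": "uniagent" } },
    "template": {
      "metadata": { "labels": { "app": "uniagent" } },
      "spec": {
        "containers": [
          {
            "name": "uniagent",
            "image": "example.com/uniagent:latest",
            "env": [
              {
                "name": "SECRET_USERNAME",
                "valueFrom": { "secretKeyRef": { "name": "key-secret", "key": "uniagentuser" } }
              }
            ]
          }
        ]
      }
    }
  }
}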

composer / satis svn repository http basic authentication

I am trying to create a Composer package repository for my company using satis.
My svn repositories are accessed via http (Apache svn).
I am trying to add this to my config.json for satis:
{
  "name": "packages",
  "homepage": "http://packages.example.org",
  "repositories": [
    {
      "type": "svn",
      "url": "myrepourl"
    }
  ],
  "require-all": true
}
The problem is that I can't authenticate against the repository:
Repository could not be processed, svn: OPTIONS of authorization failed. basic authentication rejected.
How can I pass the username/password to satis?
Thank you
According to the Composer documentation, you have to put your user key files or your certificate in the repository definition itself, not in a separate satis setting:
Using SSH:
{
  "repositories": [
    {
      "type": "composer",
      "url": "ssh2.sftp://example.org",
      "options": {
        "ssh2": {
          "username": "composer",
          "pubkey_file": "/home/composer/.ssh/id_rsa.pub",
          "privkey_file": "/home/composer/.ssh/id_rsa"
        }
      }
    }
  ]
}
Using a certificate:
{
  "repositories": [
    {
      "type": "composer",
      "url": "https://example.org",
      "options": {
        "ssl": {
          "cert_file": "/home/composer/.ssl/composer.pem"
        }
      }
    }
  ]
}
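Since your svn server uses HTTP basic auth specifically, Composer's http-basic mechanism may be closer to what you need. A sketch, with svn.example.org standing in for your real host, placed in an auth.json next to the satis config.json:
{
  "http-basic": {
    "svn.example.org": {
      "username": "your-svn-user",
      "password": "your-svn-password"
    }
  }
}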