Accessing the xpra HTML5 client behind an ingress controller (reverse proxy)

I am trying to host KiCad (PCB design software) on a Kubernetes cluster and access it remotely through the xpra HTML5 client.
Dockerfile:
FROM python:3.9.0-slim-buster

RUN apt-get update && apt install -y \
    software-properties-common \
    wget \
    gnupg2

# install xpra
RUN wget -q https://xpra.org/gpg.asc -O- | apt-key add - && \
    add-apt-repository "deb https://xpra.org/ buster main" && \
    apt-get update && apt-get install -y --no-install-recommends xpra xvfb xterm

# install dependencies
RUN apt-get update && apt install -y \
    libx11-dev libxcomposite-dev libxdamage-dev \
    libxkbfile-dev \
    openssh-client \
    sshpass \
    python3-paramiko \
    dbus-x11 \
    python3-requests \
    xpra-html5

# install kicad
RUN apt-get update && add-apt-repository -y ppa:kicad/kicad-5.1-releases && \
    apt-get install -y --no-install-recommends kicad && \
    rm -rf /var/lib/apt/lists/*

ENV DISPLAY=:0
EXPOSE 8051
CMD xpra start --start=kicad --no-pulseaudio --bind-tcp=0.0.0.0:8051 --html=on && tail -f /dev/null
Deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kicad-deployment
  labels:
    app: kicad
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kicad
  template:
    metadata:
      labels:
        app: kicad
    spec:
      containers:
      - name: kicad
        image: syashfr/kicad:1.0.0
        ports:
        - containerPort: 8051
Service file:
apiVersion: v1
kind: Service
metadata:
  name: kicad-service
spec:
  type: LoadBalancer
  selector:
    app: kicad
  ports:
  - port: 80
    targetPort: 8051
Ingress file:
kind: Ingress
metadata:
  name: kicad-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /kicad
        backend:
          serviceName: kicad-service
          servicePort: 80
I assumed the proxy_pass configuration would be applied automatically when the Ingress is created, so I have not changed nginx.conf in the ingress controller as described in https://xpra.org/trac/wiki/Nginx
However, when I try to access the application at http://ingress_address/kicad, I get the following page instead of the application UI:
It does seem that I am routed to my service, but not to the expected UI. I can, however, access the KiCad UI through the external IP of the service. What am I missing with the Ingress?

I've reproduced your issue and solved it by slightly modifying the Ingress resource. My Ingress manifest:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: kicad-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      - path: /kicad/?(.*)
        backend:
          serviceName: kicad-service
          servicePort: 80
The deployment and service YAMLs remained untouched. When you access <ingress-IP>/kicad/ you will see the expected UI.
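To see why the capture group matters, here is a small Python sketch (purely illustrative, not part of the manifest) that emulates what the nginx rewrite does to the request path:

```python
import re

# Emulate the ingress rewrite: path /kicad/?(.*) with rewrite-target /$1
# strips the /kicad prefix before the request reaches the xpra backend,
# so the HTML5 client's assets resolve against / as xpra expects.
def rewrite(path: str) -> str:
    m = re.fullmatch(r"/kicad/?(.*)", path)
    return "/" + m.group(1) if m else path

print(rewrite("/kicad/"))              # /
print(rewrite("/kicad/js/Client.js"))  # /js/Client.js
```

With the original annotation (rewrite-target: / and no capture group), every sub-path is collapsed to /, which is why the UI's assets fail to load.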


GitHub Actions with hub results in Unauthorized (HTTP 401) Bad credentials

The following example workflow runs without issues:
on: [push]
jobs:
  create_release:
    runs-on: ubuntu-latest
    env:
      GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Create release
        run: hub release create -m "$(date)" "v$(date +%s)"
However, some of my CI/CD code needs to run in a container:
on: [push]
jobs:
  create_release:
    runs-on: ubuntu-latest
    container:
      image: ubuntu:latest
    env:
      GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    steps:
      - name: Install dependencies
        run: apt update && apt install -y git hub
      - name: Checkout
        uses: actions/checkout@v2
      - name: Create release
        run: hub release create -m "$(date)" "v$(date +%s)"
Now, hub suddenly doesn't work anymore:
Run hub release create -m "$(date)" "v$(date +%s)"
hub release create -m "$(date)" "v$(date +%s)"
shell: sh -e {0}
env:
GITHUB_TOKEN: ***
Error creating release: Unauthorized (HTTP 401)
Bad credentials
Error: Process completed with exit code 1.
The issue was actually mismatched versions: hub on the native ubuntu-latest GitHub Actions runner was the (as of now) most recent version 2.14.2, while apt install on the ubuntu:latest container installed only version 2.7.0 (from Dec 28, 2018!).
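The gap between the two versions is easy to misjudge, because the strings compare misleadingly as plain text; a tiny Python sketch (illustration only):

```python
# Version strings compare misleadingly as text: "2.7.0" > "2.14.2"
# lexicographically, but numerically 2.7.0 is far older than 2.14.2.
def parse(version: str):
    return tuple(int(part) for part in version.split("."))

print("2.7.0" > "2.14.2")                # True  (lexicographic, wrong)
print(parse("2.7.0") < parse("2.14.2"))  # True  (numeric, correct)
```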
The solution is to install the latest hub binary directly from its GitHub releases page instead of using apt:
on: [push]
jobs:
  create_release:
    runs-on: ubuntu-latest
    container:
      image: ubuntu:latest
    env:
      GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    steps:
      - name: Install dependencies
        run: |
          apt update && apt install -y git wget
          url="$(wget -qO- https://api.github.com/repos/github/hub/releases/latest | tr '"' '\n' | grep '.*/download/.*/hub-linux-amd64-.*.tgz')"
          wget -qO- "$url" | tar -xzvf- -C /usr/bin --strip-components=2 --wildcards "*/bin/hub"
      - name: Checkout
        uses: actions/checkout@v2
      - name: Create release
        run: hub release create -m "$(date)" "v$(date +%s)"
After adding sudo, it works for me:
- name: Install Deps
  run: |
    sudo apt-get update 2> /dev/null || true
    sudo apt-get install -y git
    sudo apt-get install -y wget
    url="$(sudo wget -qO- https://api.github.com/repos/github/hub/releases/latest | tr '"' '\n' | grep '.*/download/.*/hub-linux-amd64-.*.tgz')"
    sudo wget -qO- "$url" | sudo tar -xzvf- -C /usr/bin --strip-components=2 --wildcards "*/bin/hub"

How to configure Jaeger agent to send traces to collector on another server

I'm trying to use Jaeger to manage a tracing system. Docker runs the "all-in-one" image locally with the application on the same host without any issues. My question is how to configure a Jaeger agent on host1 so that it sends traces to a Jaeger collector on another host, host2. Host2 is configured with "all-in-one"; I can see the Jaeger UI on host2, but it doesn't seem to be getting any traces from host1.
Configure the tracer:
var configuration = new Configuration("service-name")
    .withSampler(Configuration.SamplerConfiguration.fromEnv())
    .withReporter(Configuration.ReporterConfiguration.fromEnv());
GlobalTracer.registerIfAbsent(configuration.getTracer());
return GlobalTracer.get();
Environment variables added to the container on host1:
-e JAEGER_AGENT_HOST=jaeger.hostname.com \
-e JAEGER_AGENT_PORT=6831 \
And the Jaeger all-in-one image started on host2:
$ docker run -d --name jaeger \
-p 5775:5775/udp \
-p 6831:6831/udp \
-p 6832:6832/udp \
-p 5778:5778 \
-p 16686:16686 \
-p 14268:14268 \
-p 9411:9411 \
jaegertracing/all-in-one:latest
Any suggestions will be appreciated.
In my case:
Host2:
version: '2'
services:
  hotrod:
    image: jaegertracing/example-hotrod:1.28
    ports:
      - '8080:8080'
      - '8083:8083'
    command: ["-m","prometheus","all"]
    environment:
      - JAEGER_AGENT_HOST=jaeger-agent
      - JAEGER_AGENT_PORT=6831
      - JAEGER_SAMPLER_TYPE=remote
      - JAEGER_SAMPLING_ENDPOINT=http://jaeger-agent:5778/sampling
    depends_on:
      - jaeger-agent
  jaeger-collector:
    image: jaegertracing/jaeger-collector:1.28
    command:
      - "--cassandra.keyspace=jaeger_v1_dc1"
      - "--cassandra.servers=cassandra"
      - "--collector.zipkin.host-port=9411"
      - "--sampling.initial-sampling-probability=.5"
      - "--sampling.target-samples-per-second=.01"
    environment:
      - SAMPLING_CONFIG_TYPE=adaptive
    ports:
      - "14269:14269"
      - "14268:14268"
      - "14250:14250"
      - "9411:9411"
    restart: on-failure
    depends_on:
      - cassandra-schema
  jaeger-query:
    image: jaegertracing/jaeger-query:1.28
    command: ["--cassandra.keyspace=jaeger_v1_dc1", "--cassandra.servers=cassandra"]
    ports:
      - "16686:16686"
      - "16687"
    restart: on-failure
    depends_on:
      - cassandra-schema
  jaeger-agent:
    image: jaegertracing/jaeger-agent:1.28
    command: ["--reporter.grpc.host-port=jaeger-collector:14250"]
    ports:
      - "5775:5775/udp"
      - "6831:6831/udp"
      - "6832:6832/udp"
      - "5778:5778"
    restart: on-failure
    depends_on:
      - jaeger-collector
  cassandra:
    image: cassandra:4.0
  cassandra-schema:
    image: jaegertracing/jaeger-cassandra-schema:1.28
    depends_on:
      - cassandra
Host1:
version: '2'
services:
  jaeger-agent:
    image: jaegertracing/jaeger-agent:1.28
    command: ["--reporter.grpc.host-port=host2:14250"]
    ports:
      - "5775:5775/udp"
      - "6831:6831/udp"
      - "6832:6832/udp"
      - "5778:5778"
    restart: on-failure
It works well for me.
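Before wiring the compose files together, it can help to confirm that host1 can actually reach the collector's gRPC port on host2, since that is the path the agent's --reporter.grpc.host-port setting uses. A minimal Python sketch ("host2" is a placeholder for your real hostname):

```python
import socket

# Probe a TCP port (e.g. the collector's gRPC port 14250) and report
# whether a connection can be opened within the timeout.
def reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. reachable("host2", 14250) from host1
```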

Recv failure when I use docker-compose to set up Redis

Sorry, but I'm new to Redis and Docker and I'm getting stuck.
I want to connect Redis to my localhost with docker-compose. When I run docker-compose, both my web and redis services show that they are up, but when I test with curl -L http://localhost:8081/ping I get the message "curl: (56) Recv failure".
I tried changing my docker-compose.yaml, but it is not working.
docker-compose.yaml:
version: '3'
services:
  redis:
    image: "redis:latest"
    ports:
      - "6379:6379"
  web:
    build: .
    ports:
      - "8081:6379"
    environment:
      REDIS_HOST: 0.0.0.0
      REDIS_PORT: 6379
      REDIS_PASSWORD: ""
    depends_on:
      - redis
Dockerfile
FROM python:3-onbuild
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
CMD ["python", "main.py"]
My expected results are:
$ curl -L http://localhost:8081/ping
pong
$ curl -L http://localhost:8081/redis-status
{"redis_connectivity": "OK"}
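For reference, the Redis connectivity check itself needs no client library; a hedged sketch of the kind of probe a /redis-status endpoint might perform (assumptions: inside the compose network the hostname is the service name "redis", not 0.0.0.0, and Redis speaks the RESP protocol on port 6379):

```python
import socket

def resp_command(*args: bytes) -> bytes:
    # Encode a command in RESP, e.g. PING -> b"*1\r\n$4\r\nPING\r\n"
    out = b"*%d\r\n" % len(args)
    for arg in args:
        out += b"$%d\r\n%s\r\n" % (len(arg), arg)
    return out

def redis_ping(host: str = "redis", port: int = 6379) -> str:
    # From another container, use the service name "redis"; from the
    # host machine, use localhost:6379 (the published port).
    with socket.create_connection((host, port), timeout=3) as s:
        s.sendall(resp_command(b"PING"))
        return s.recv(64).decode().strip()  # "+PONG" on success
```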

How to interact between multiple Docker containers, e.g. an Ubuntu container with a Selenium hub container

I have the following three Docker containers:
1. An Ubuntu container with Mono that has the Selenium scripts (DLL)
2. A Selenium Hub container
3. A Selenium Chrome node container
When I build the Docker Compose file, all three containers are up and running, but the Ubuntu container exits after some time without executing any tests. Any idea how to implement this?
I am executing the tests in the Ubuntu container using Mono and would like to create a Docker image once this works. Any explanation or sample code would be really great.
I have created a bridge network and assigned static IPs to all three containers.
Docker Compose file:
version: '3.7'
services:
  seleniumhub:
    image: selenium/hub
    container_name: hubcontainer
    networks:
      ynetwork:
        ipv4_address: 172.21.0.2
    ports:
      - "4444:4444"
    privileged: true
  nodechrome:
    image: selenium/node-chrome-debug
    container_name: chromecontainer
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - seleniumhub
    environment:
      - HUB_HOST=seleniumhub
      - HUB_PORT=4444
      - NODE_MAX_INSTANCES=5
      - NODE_MAX_SESSION=5
      - START_XVFB=false
    networks:
      ynetwork:
        ipv4_address: 172.21.0.10
  Mytests:
    container_name: Myubuntutests
    depends_on:
      - seleniumhub
      - nodechrome
    networks:
      ynetwork:
        ipv4_address: 172.21.0.11
    build:
      context: .
      dockerfile: ubuntu.Dockerfile
networks:
  ynetwork:
    name: ytestsnetwork
    driver: bridge
    ipam:
      config:
        - subnet: 172.21.0.0/16
Dockerfile (ubuntu.Dockerfile):
FROM ubuntu
COPY /bin/Debug/ /MyTests
ENV DEBIAN_FRONTEND=noninteractive
ENV TZ=Asia/Tokyo
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone && \
    apt-get update && apt-get clean && \
    apt-get install -y wget curl nuget mono-complete && \
    apt-get update && nuget update -self && nuget install testrunner
WORKDIR "/MyTests"
ENTRYPOINT mono /TestRunner.1.8.0/tools/testrunner.exe MyTests.dll
Docker Compose commands used (tried):
docker-compose up --build
docker-compose up --build -d
I expect Docker Compose to build all three containers, execute the tests, and exit once done.
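One common reason a test container exits without running anything is that it starts before the hub is ready: depends_on only orders container startup, it does not wait for readiness. A hedged Python sketch of a wait loop that could run before launching the Mono test runner (the status URL assumes the compose service name seleniumhub from above):

```python
import time
import urllib.error
import urllib.request

# Poll an HTTP endpoint until it answers 200 or the attempts run out.
# For the grid above: wait_for("http://seleniumhub:4444/wd/hub/status")
def wait_for(url: str, attempts: int = 30, delay: float = 2.0) -> bool:
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=3) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass
        time.sleep(delay)
    return False
```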

Make kubectl work in GitLab CI

I am searching for a way to use kubectl in GitLab CI.
So far I have the following job:
deploy_to_dev:
  stage: deploy
  image: docker:dind
  environment:
    name: dev
  script:
    - mkdir -p $HOME/.kube
    - echo $KUBE_CONFIG | base64 -d > $HOME/.kube/config
    - kubectl config view
  only:
    - develop
But it says that GitLab does not know kubectl. Can you point me in the right direction?
You are using the docker:dind image, which does not have the kubectl binary; you should bring your own image with the binary, or download it in the job:
deploy_to_dev:
  stage: deploy
  image: alpine:3.7
  environment:
    name: dev
  script:
    - apk update && apk add --no-cache curl
    - curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
    - chmod +x ./kubectl && mv ./kubectl /usr/local/bin/kubectl
    - mkdir -p $HOME/.kube
    - echo -n $KUBE_CONFIG | base64 -d > $HOME/.kube/config
    - kubectl config view
  only:
    - develop
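The script above expects $KUBE_CONFIG to hold the base64 encoding of the entire kubeconfig file, which it decodes back into $HOME/.kube/config. A small Python sketch of that round trip (the kubeconfig content shown is a placeholder):

```python
import base64

# What the CI job does in shell: store base64(kubeconfig) in a CI
# variable, then decode it back into a file the cluster tools can read.
kubeconfig = "apiVersion: v1\nkind: Config\nclusters: []\n"  # placeholder
encoded = base64.b64encode(kubeconfig.encode()).decode()     # goes into $KUBE_CONFIG
decoded = base64.b64decode(encoded).decode()                 # written to ~/.kube/config
print(decoded == kubeconfig)  # True
```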
Alternatively, use the image google/cloud-sdk, which comes with gcloud and kubectl preinstalled.
build:
  stage: build
  image: google/cloud-sdk
  services:
    - docker:dind
  script:
    # Make gcloud available
    - source /root/.bashrc