Problems getting cloud-init to use GPT instead of MBR when resizing

I downloaded the image "CentOS-7-x86_64-GenericCloud-2009.qcow2" to create a CentOS 7 template for KVM virtual machines on Proxmox.
Everything is fine, but if I assign a disk larger than 2 TB, then when I clone the VM, cloud-init only resizes the disk up to the 2 TB limit, because the disk label is MBR instead of GPT.
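The 2 TB ceiling itself comes from the MBR format: partition start and size are stored as 32-bit counts of 512-byte sectors, which can be checked with a quick shell arithmetic sketch:

```shell
# MBR stores partition start/size as 32-bit sector counts of 512-byte
# sectors, so the largest addressable partition is 2^32 * 512 bytes:
echo $(( (1 << 32) * 512 ))   # 2199023255552 bytes = 2 TiB
# GPT uses 64-bit LBAs, so it does not hit this limit.
```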
This is my /etc/cloud/cloud.cfg
users:
 - default
disable_root: 0
ssh_pwauth: 0
#mount_default_fields: [~, ~, 'auto', 'defaults,nofail,x-systemd.requires=cloud-init.service', '0', '2']
#resize_rootfs_tmp: /dev
resize_rootfs: false
disk_setup:
  /dev/sda:
    table_type: gpt
    layout: True
    overwrite: True
fs_setup:
 - label: None
   filesystem: ext4
   device: /dev/sda
   partition: sda1
ssh_deletekeys: 1
ssh_genkeytypes: ~
syslog_fix_perms: ~
disable_vmware_customization: false
cloud_init_modules:
 - disk_setup
 - migrator
 - bootcmd
 - write-files
 - growpart
 - resizefs
 - set_hostname
 - update_hostname
 - update_etc_hosts
 - rsyslog
 - users-groups
 - ssh
cloud_config_modules:
 - mounts
 - locale
 - set-passwords
 - rh_subscription
 - yum-add-repo
 - package-update-upgrade-install
 - timezone
 - puppet
 - chef
 - salt-minion
 - mcollective
 - disable-ec2-metadata
 - runcmd
cloud_final_modules:
 - rightscale_userdata
 - scripts-per-once
 - scripts-per-boot
 - scripts-per-instance
 - scripts-user
 - ssh-authkey-fingerprints
 - keys-to-console
 - phone-home
 - final-message
 - power-state-change
system_info:
  default_user:
    name: root
    lock_passwd: false
    gecos: Cloud User
    groups: [adm, systemd-journal]
    sudo: ["ALL=(ALL) NOPASSWD:ALL"]
    shell: /bin/bash
  distro: rhel
  paths:
    cloud_dir: /var/lib/cloud
    templates_dir: /etc/cloud/templates
  ssh_svcname: sshd
When I start the cloned VM, it ignores my config and resizes the / partition to 2 TB with MBR.
Can you help me, please?
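One approach, assuming you can modify the template before cloning, is to convert the template's partition table to GPT up front rather than relying on disk_setup (which will not normally repartition the disk it booted from). A sketch, assuming the root disk is /dev/sda:

```shell
# Sketch: run inside the template VM before converting it to a template.
# Assumes the root disk is /dev/sda; adjust for your setup.
yum install -y gdisk cloud-utils-growpart  # growpart needs sgdisk to grow GPT partitions
sgdisk --mbrtogpt /dev/sda                 # convert the MBR label to GPT in place
partprobe /dev/sda                         # re-read the partition table
```

Note that a BIOS-booted machine converted from MBR to GPT may need a BIOS boot partition to stay bootable, so test the template before rolling it out.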

Related

Mercure & Event Source Polyfill - HEARTBEAT_INTERVAL not working on docker-compose

I use this server. My docker-compose:
mercure:
  image: dunglas/mercure
  restart: unless-stopped
  volumes:
    - mercure_data:/data
    - mercure_config:/config
  environment:
    - HEARTBEAT_INTERVAL=15s
    - SERVER_NAME= :80
    - PUBLISH_ALLOWED_ORIGINS=*
    - MERCURE_EXTRA_DIRECTIVES= cors_origins *
In Mercure & Event Source Polyfill I get: Error: No activity within 45000 milliseconds. 2 chars received. Reconnecting
I found HEARTBEAT_INTERVAL, but how do I set it in docker-compose?
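For what it's worth, environment entries in compose list syntax pass everything after the first = through verbatim, so stray spaces become part of the value. A cleaned-up sketch (assuming the mercure image version in use still honors HEARTBEAT_INTERVAL):

```yaml
# Sketch: same service with the values tightened up; whether
# HEARTBEAT_INTERVAL is honored depends on the mercure image version.
mercure:
  image: dunglas/mercure
  restart: unless-stopped
  environment:
    - HEARTBEAT_INTERVAL=15s
    - SERVER_NAME=:80
    - PUBLISH_ALLOWED_ORIGINS=*
    - MERCURE_EXTRA_DIRECTIVES=cors_origins *
```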

Redis cluster HA not working in kubernetes

I deployed https://github.com/bitnami/charts/tree/master/bitnami/redis-cluster
with the following command:
helm install redis-cluster bitnami/redis-cluster --create-namespace -n redis -f redis-values.yaml
redis-values.yaml
cluster:
  init: true
  nodes: 6
  replicas: 1
podDisruptionBudget:
  minAvailable: 2
persistence:
  size: 1Gi
password: "redis#pass"
redis:
  configmap: |-
    maxmemory 600mb
    maxmemory-policy allkeys-lru
    maxclients 40000
    cluster-require-full-coverage no
    cluster-allow-reads-when-down yes
sysctlImage:
  enabled: true
  mountHostSys: true
  command:
    - /bin/sh
    - -c
    - |-
      sysctl -w net.core.somaxconn=10000
      echo never > /host-sys/kernel/mm/transparent_hugepage/enabled
      # echo never > /host-sys/kernel/mm/transparent_hugepage/defrag
metrics:
  enabled: true
Now the cluster works fine, but the issue is that if I delete any pod, Redis goes down and I start getting errors from Redis.
Here is my config for quarkus to connect:
quarkus.redis.hosts=redis://redis-cluster.redis.svc.local:6379
quarkus.redis.master-name=redis-cluster
quarkus.redis.password=redis#pass
quarkus.redis.client-type=cluster
The fix: don't connect through the service, but list the nodes instead. Change from
quarkus.redis.hosts=redis://redis-cluster:6379
To
quarkus.redis.hosts=redis://redis-cluster-0.redis-cluster-headless.redis.svc.cluster.local:6379,redis://redis-cluster-1.redis-cluster-headless.redis.svc.cluster.local:6379,redis://redis-cluster-2.redis-cluster-headless.redis.svc.cluster.local:6379,redis://redis-cluster-3.redis-cluster-headless.redis.svc.cluster.local:6379,redis://redis-cluster-4.redis-cluster-headless.redis.svc.cluster.local:6379,redis://redis-cluster-5.redis-cluster-headless.redis.svc.cluster.local:6379
The format for each host is:
POD-NAME.HEADLESS-SVC-NAME.NAMESPACE.svc.cluster.local:PORT
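To sanity-check those per-pod DNS names before wiring them into Quarkus, one option (a sketch; the pod and namespace names are assumed to match the deployment above) is a throwaway busybox pod:

```shell
# Resolve one node's headless-service DNS name from inside the cluster.
# Assumes the chart was installed as 'redis-cluster' in namespace 'redis'.
kubectl -n redis run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup redis-cluster-0.redis-cluster-headless.redis.svc.cluster.local
```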

Slow apache + mysql inside of Docker

I have a problem with the performance of my Dockerized app. I'm on Windows. When I run the app using XAMPP, a page takes ~1 second to load. When I run it inside Docker, it takes ~5 seconds. I tried:
1. Docker
2. Docker Toolbox (which creates VirtualBox linux machine and runs Docker inside of it)
Results are the same. Here is my docker-compose file:
version: '3'
networks:
  default:
    driver: bridge
services:
  webserver:
    build: ./docker/webserver
    image: yiisoftware/yii2-php:7.3-apache
    ports:
      - "80:80"
      - "443:443"
    networks:
      - default
    volumes:
      - /aaa:/var/www/html
    links:
      - db:mysql
    environment:
      MYSQL_PORT_3306_TCP_ADDR: db
  db:
    image: mysql:5.7
    ports:
      - "3306:3306"
    networks:
      - default
    volumes:
      - ./db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=pass
      - MYSQL_DATABASE=aaa
Can anybody give me a hint how to fix this? Or is this regular behavior on a Windows PC? Thank you.
The reason was that APCu was missing inside the container, and without the cache the code was 20 times slower. Always check that you have all the necessary libs and modules inside your containers!
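A quick way to verify and apply that fix, sketched for the official PHP base images (the tool and package names are assumptions if your image differs):

```shell
# Inside the PHP container: check whether APCu is loaded, install it if not.
php -m | grep -qi apcu || {
  pecl install apcu            # build the extension
  docker-php-ext-enable apcu   # drop in the ini snippet that enables it
}
php -r 'var_dump(extension_loaded("apcu"));'
```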

gitlab-runner, stuck jobs in creating mode (500 internal server error)

I have a problem that I cannot solve alone.
I spent the day on it and tried to find a solution by myself before creating this ticket.
Context:
I use the latest proposed version of GitLab: sameersbn/gitlab:11.5.1
I have a runner launched in a docker container: gitlab/gitlab-runner:alpine
I use Traefik
Everything is started thanks to docker-compose.
Steps:
I launch all of my containers
I register a runner (command visible below)
I notice in the admin area that my runners are registered in GitLab: /admin/runners
I run a pipeline, and the job is stuck :/
I have tried everything:
  - updating GitLab
  - updating the runner and using a previous version
  - removing the runner from the gitlab network
  - ...
Details:
My Traefik docker-compose.yml:
version: '2'
services:
  proxy:
    image: traefik:alpine
    container_name: traefik
    networks:
      - traefik
    ports:
      - "80:80"
      - "8080:8080"
      - "443:443"
    volumes:
      - /data/traefik/traefik.toml:/etc/traefik/traefik.toml
      - /data/traefik/acme:/etc/traefik/acme
      - /var/run/docker.sock:/var/run/docker.sock
    restart: unless-stopped
    labels:
      - "traefik.frontend.rule=Host:traefik.mydomain.com"
      - "traefik.port=8080"
      - "traefik.backend=traefik"
      - "traefik.frontend.entryPoints=http,https"
  portainer:
    image: portainer/portainer
    container_name: portainer
    networks:
      - traefik
    labels:
      - "traefik.frontend.rule=Host:portainer.mydomain.com"
      - "traefik.port=9000"
      - "traefik.backend=portainer"
      - "traefik.frontend.entryPoints=http,https"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    restart: unless-stopped
networks:
  traefik:
    external:
      name: traefik
My Gitlab docker-compose.yml:
version: '2'
services:
  redis:
    restart: always
    image: sameersbn/redis:4.0.9-1
    container_name: gitlab-redis
    command:
      - --loglevel warning
    networks:
      - gitlab
    volumes:
      - /data/gitlab/redis:/var/lib/redis:Z
    labels:
      - "traefik.enable=false"
  postgresql:
    restart: always
    image: sameersbn/postgresql:10
    container_name: gitlab-postgresql
    networks:
      - gitlab
    volumes:
      - /data/gitlab/postgresql:/var/lib/postgresql:Z
    environment:
      - DB_USER=gitlab
      - DB_PASS=password
      - DB_NAME=gitlabhq_production
      - DB_EXTENSION=pg_trgm
    labels:
      - "traefik.enable=false"
  registry:
    image: registry:2
    container_name: gitlab-registry
    restart: always
    expose:
      - "5000"
    ports:
      - "5000:5000"
    networks:
      - gitlab
      - traefik
    volumes:
      - /data/gitlab/registry:/registry
      - /data/gitlab/certs:/certs
    labels:
      - traefik.port=5000
      - traefik.frontend.rule=Host:registry.mydomain.com
      - traefik.frontend.auth.basic=mydomain:fd9ef338f7de0f196c5409a668102b9a
    environment:
      - REGISTRY_LOG_LEVEL=error
      - REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/registry
      - REGISTRY_AUTH_TOKEN_REALM=https://gitlab.mydomain.com/jwt/auth
      - REGISTRY_AUTH_TOKEN_SERVICE=container_registry
      - REGISTRY_AUTH_TOKEN_ISSUER=gitlab-issuer
      - REGISTRY_AUTH_TOKEN_ROOTCERTBUNDLE=/certs/registry.crt
      - REGISTRY_STORAGE_DELETE_ENABLED=true
  gitlab-runner:
    image: gitlab/gitlab-runner:alpine
    container_name: gitlab-runner
    restart: always
    depends_on:
      - gitlab
    networks:
      - gitlab
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /data/gitlab-runner:/etc/gitlab-runner:Z
    environment:
      - CI_SERVER_URL=https://gitlab.mydomain.com/
      - REGISTRATION_TOKEN=FzZtgyN1cAMzoYne89ts
    labels:
      - "traefik.enable=false"
  gitlab:
    restart: always
    image: sameersbn/gitlab:11.5.1
    container_name: gitlab-ce
    depends_on:
      - redis
      - postgresql
      - registry
    ports:
      - "10080:80"
      - "10022:22"
    networks:
      - gitlab
      - traefik
    volumes:
      - /data/gitlab/gitlab:/home/git/data:Z
      - /data/gitlab/certs:/certs
    environment:
      - DEBUG=false
      - DB_ADAPTER=postgresql
      - DB_HOST=postgresql
      - DB_PORT=5432
      - DB_USER=gitlab
      - DB_PASS=password
      - DB_NAME=gitlabhq_production
      - REDIS_HOST=redis
      - REDIS_PORT=6379
      - TZ=Europe/Paris
      - GITLAB_TIMEZONE=Paris
      - GITLAB_HTTPS=true
      - SSL_SELF_SIGNED=false
      - GITLAB_HOST=gitlab.mydomain.com
      - GITLAB_PORT=
      - GITLAB_SSH_PORT=10022
      - GITLAB_RELATIVE_URL_ROOT=
      - GITLAB_SECRETS_DB_KEY_BASE=w58HODDUerP7YOuAbt9heD9j6s80P5A8POUdsd4wHeh7tLU8wdSG0noq2LsRnvqsff9btHJDovejeTMWflg78tvKqT7y9omqVTvh
      - GITLAB_SECRETS_SECRET_KEY_BASE=w58HODDUerP7YOuAbt9heD9j6s80P5A8POUdsd4wHeh7tLU8wdSG0noq2LsRnvqsff9btHJDovejeTMWflg78tvKqT7y9omqVTvh
      - GITLAB_SECRETS_OTP_KEY_BASE=w58HODDUerP7YOuAbt9heD9j6s80P5A8POUdsd4wHeh7tLU8wdSG0noq2LsRnvqsff9btHJDovejeTMWflg78tvKqT7y9omqVTvh
      - GITLAB_ROOT_PASSWORD=
      - GITLAB_ROOT_EMAIL=
      - GITLAB_NOTIFY_ON_BROKEN_BUILDS=true
      - GITLAB_NOTIFY_PUSHER=false
      - GITLAB_EMAIL=notifications@example.com
      - GITLAB_EMAIL_REPLY_TO=noreply@example.com
      - GITLAB_INCOMING_EMAIL_ADDRESS=reply@example.com
      - GITLAB_BACKUP_SCHEDULE=daily
      - GITLAB_BACKUP_TIME=01:00
      - SMTP_ENABLED=false
      - SMTP_DOMAIN=www.example.com
      - SMTP_HOST=smtp.gmail.com
      - SMTP_PORT=587
      - SMTP_USER=mailer@example.com
      - SMTP_PASS=password
      - SMTP_STARTTLS=true
      - SMTP_AUTHENTICATION=login
      - IMAP_ENABLED=false
      - IMAP_HOST=imap.gmail.com
      - IMAP_PORT=993
      - IMAP_USER=mailer@example.com
      - IMAP_PASS=password
      - IMAP_SSL=true
      - IMAP_STARTTLS=false
      - OAUTH_ENABLED=false
      - OAUTH_AUTO_SIGN_IN_WITH_PROVIDER=
      - OAUTH_ALLOW_SSO=
      - OAUTH_BLOCK_AUTO_CREATED_USERS=true
      - OAUTH_AUTO_LINK_LDAP_USER=false
      - OAUTH_AUTO_LINK_SAML_USER=false
      - OAUTH_EXTERNAL_PROVIDERS=
      - OAUTH_CAS3_LABEL=cas3
      - OAUTH_CAS3_SERVER=
      - OAUTH_CAS3_DISABLE_SSL_VERIFICATION=false
      - OAUTH_CAS3_LOGIN_URL=/cas/login
      - OAUTH_CAS3_VALIDATE_URL=/cas/p3/serviceValidate
      - OAUTH_CAS3_LOGOUT_URL=/cas/logout
      - OAUTH_GOOGLE_API_KEY=
      - OAUTH_GOOGLE_APP_SECRET=
      - OAUTH_GOOGLE_RESTRICT_DOMAIN=
      - OAUTH_FACEBOOK_API_KEY=
      - OAUTH_FACEBOOK_APP_SECRET=
      - OAUTH_TWITTER_API_KEY=
      - OAUTH_TWITTER_APP_SECRET=
      - OAUTH_GITHUB_API_KEY=
      - OAUTH_GITHUB_APP_SECRET=
      - OAUTH_GITHUB_URL=
      - OAUTH_GITHUB_VERIFY_SSL=
      - OAUTH_GITLAB_API_KEY=
      - OAUTH_GITLAB_APP_SECRET=
      - OAUTH_BITBUCKET_API_KEY=
      - OAUTH_BITBUCKET_APP_SECRET=
      - OAUTH_SAML_ASSERTION_CONSUMER_SERVICE_URL=
      - OAUTH_SAML_IDP_CERT_FINGERPRINT=
      - OAUTH_SAML_IDP_SSO_TARGET_URL=
      - OAUTH_SAML_ISSUER=
      - OAUTH_SAML_LABEL="Our SAML Provider"
      - OAUTH_SAML_NAME_IDENTIFIER_FORMAT=urn:oasis:names:tc:SAML:2.0:nameid-format:transient
      - OAUTH_SAML_GROUPS_ATTRIBUTE=
      - OAUTH_SAML_EXTERNAL_GROUPS=
      - OAUTH_SAML_ATTRIBUTE_STATEMENTS_EMAIL=
      - OAUTH_SAML_ATTRIBUTE_STATEMENTS_NAME=
      - OAUTH_SAML_ATTRIBUTE_STATEMENTS_FIRST_NAME=
      - OAUTH_SAML_ATTRIBUTE_STATEMENTS_LAST_NAME=
      - OAUTH_CROWD_SERVER_URL=
      - OAUTH_CROWD_APP_NAME=
      - OAUTH_CROWD_APP_PASSWORD=
      - OAUTH_AUTH0_CLIENT_ID=
      - OAUTH_AUTH0_CLIENT_SECRET=
      - OAUTH_AUTH0_DOMAIN=
      - OAUTH_AZURE_API_KEY=
      - OAUTH_AZURE_API_SECRET=
      - OAUTH_AZURE_TENANT_ID=
      - GITLAB_REGISTRY_ENABLED=true
      - GITLAB_REGISTRY_HOST=registry.mydomain.com
      - GITLAB_REGISTRY_API_URL=http://localhost:5000
      - GITLAB_REGISTRY_KEY_PATH=/certs/registry.key
      - GITLAB_REGISTRY_ISSUER=gitlab-issuer
    labels:
      - "traefik.frontend.rule=Host:gitlab.mydomain.com"
      - "traefik.port=80"
      - "traefik.backend=gitlab"
      - "traefik.frontend.entryPoints=http,https"
      - "traefik.docker.network=traefik"
networks:
  gitlab:
    driver: bridge
  traefik:
    external:
      name: traefik
Command used to register my runner:
docker exec -it gitlab-runner gitlab-runner register \
--non-interactive \
--name "Docker runner dind 1" \
--url "https://gitlab.mydomain.com/" \
--registration-token "FzZtgyN1cAMzoYne89ts" \
--env "COMPOSER_CACHE_DIR=/cache" \
--env "GIT_SSL_NO_VERIFY=true" \
--env "DOCKER_DRIVER=overlay2" \
--executor "docker" \
--docker-image docker:stable-dind \
--docker-privileged="true" \
--docker-volumes /var/run/docker.sock:/var/run/docker.sock \
--docker-volumes /cache \
--tag-list "docker,dind" \
--run-untagged \
--locked="false"
According to my notes, a 500 appearing in the gitlab-runner doesn't indicate much. It simply echoes the error it receives from workhorse, which mangles the real message (some variant of 4XX from either gitaly or linguist) into a 500. The first log to check is one's production.log, but this seems to only log the 500 errors emitted by workhorse, so you have to go a level deeper and scan your workhorse.log.
Gitaly
Check the workhorse.log for a version mismatch between gitaly and workhorse. As I recall, it was critical that both applications have comparable version numbers; there was a table one could check, as this dictated which protocols they understood.
Linguist
This was a really obscure issue I encountered. Essentially, the version of Ruby used to run Gitaly and the version of Ruby used by Gitaly to run gitaly-ruby (the sub-processes it spawns internally) were different. This is apparently indicated by cryptic messages such as:
time="2017-12-04T18:11:34+02:00" level=fatal msg="load config" config_path=/etc/gitaly/config.toml error="load linguist colors: exit status 1; stderr: \"/usr/lib64/ruby/gems/2.3.0/gems/bundler-1.13.7/lib/bundler/definition.rb:179:in `rescue in specs': Your bundle is locked to rake (12.1.0), but that version could not be found in any of the sources listed in your Gemfile. If you haven't changed sources, that means the author of rake (12.1.0) has removed it. You'll need to update your bundle to a different version of rake (12.1.0) that hasn't been removed in order to install. (Bundler::GemNotFound)\\n\\tfrom /usr/lib64/ruby/gems/2.3.0/gems/bundler-1.13.7/lib/bundler/definition.rb:173:in `specs'\\n\\tfrom /usr/lib64/ruby/gems/2.3.0/gems/bundler-1.13.7/lib/bundler/definition.rb:233:in `specs_for'\\n\\tfrom /usr/lib64/ruby/gems/2.3.0/gems/bundler-1.13.7/lib/bundler/definition.rb:222:in `requested_specs'\\n\\tfrom /usr/lib64/ruby/gems/2.3.0/gems/bundler-1.13.7/lib/bundler/runtime.rb:118:in `block in definition_method'\\n\\tfrom /usr/lib64/ruby/gems/2.3.0/gems/bundler-1.13.7/lib/bundler/runtime.rb:19:in `setup'\\n\\tfrom /usr/lib64/ruby/gems/2.3.0/gems/bundler-1.13.7/lib/bundler.rb:99:in `setup'\\n\\tfrom /usr/lib64/ruby/gems/2.3.0/gems/bundler-1.13.7/lib/bundler/setup.rb:20:in `<top (required)>'\\n\\tfrom /usr/lib64/ruby/site_ruby/2.3.0/rubygems/core_ext/kernel_require.rb:55:in `require'\\n\\tfrom /usr/lib64/ruby/site_ruby/2.3.0/rubygems/core_ext/kernel_require.rb:55:in `require'\\n\""
time="2017-12-04T18:17:54+02:00" level=info msg="Starting Gitaly" version="Gitaly, version 0.52.0, built 20171204.135804"
Note:
Please bear in mind that my notes are particular to Gentoo and a different version of GitlabHQ, and may or may not be applicable to your situation. Please update your question as you find out more information, since my notes may contain further details relevant to your problem.
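As a starting point for the log hunt described above, something like the following may work; the paths are assumptions for the sameersbn image, so adjust them to wherever your data volume puts GitLab's logs:

```shell
# Hypothetical log locations inside the sameersbn/gitlab container;
# verify the actual paths under your /home/git/data volume first.
docker exec gitlab-ce tail -n 50 /home/git/gitlab/log/production.log
docker exec gitlab-ce tail -n 50 /home/git/gitlab/log/workhorse.log
```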

Update shared volume from data container

Hi guys, I'm in this situation: I'd like to deploy changes to my source code by rebuilding the data container, which contains a COPY command to transfer the source into the volume. However, when I rebuild the data image and re-run docker-compose, I'm stuck with the old code, and the only way to update everything is to remove the webroot volume and recreate it.
Where is the mistake?
server:
  build: ./docker/apache
  image: server:1.3.16
  restart: always
  links:
    - fpm
  ports:
    - 80:80 # HTTP
    - 443:443 # HTTPS
  volumes:
    - webroot:/var/www/html:ro
fpm:
  build: ./docker/php
  image: fpm:1.0
  restart: always
  links:
    - database
  volumes:
    - webroot:/var/www/html
data:
  build:
    context: .
    dockerfile: dataDockerFile
  image: smanapp/data:1.0.0
  volumes:
    - webroot:/var/www/html
volumes:
  webroot:
The named volume webroot is meant to persist data across container restarts/rebuilds. The only time the data in the volume is updated from an image is when the volume is created, and the contents of the directory in the image are copied in.
It looks like you mean to use volumes_from, which is how you get containers to mount the volumes defined on data. This is the original "data container" method of sharing data that named volumes were designed to replace.
version: "2.1"
services:
  server:
    image: busybox
    volumes_from:
      - data
    command: ls -l /var/www/html
  fpm:
    image: busybox
    volumes_from:
      - data
    command: ls -l /var/www/html
  data:
    build: .
    image: dply/data
    volumes:
      - /var/www/html
Note that volumes_from was removed in version 3 of the compose file format, so you may need to stick with recreating the volume if you want to use newer features.
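If you stay on named volumes instead, the recreate-the-volume workaround can be scripted; a sketch (the volume name prefix depends on your compose project name, so 'myproject' is a placeholder):

```shell
# Named volumes copy image contents only when first created, so to pick
# up a rebuilt data image, drop and recreate the volume:
docker-compose build data
docker-compose down                 # stop the containers using the volume
docker volume rm myproject_webroot  # 'myproject' = your compose project name
docker-compose up -d                # volume is recreated from the new image
```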