I am attempting to deploy a test machine in my lab through the MAAS CLI. The machine goes through deployment; however, the cloud-init user-data never fires on boot.
[user-data]
#cloud-config
user: terminal
password:
chpasswd: {expire: False}
ssh_pwauth: True
package_update: true
package_upgrade: true
runcmd:
  - 'curl -L https://bootstrap.saltstack.com -o install_salt.sh'
  - 'sh install_salt.sh -A 192.168.1.155'
  - 'apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 78BD65473CB3BD13'
[CLI CMDS]
user_data=$(base64 -w0 /home/aweare/cloud-init/user-data)
maas aweare machine deploy mktpfp user_data=$user_data
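For debugging, cloud-init's own status and logs on the deployed machine show whether the user-data was delivered at all. A minimal sketch, assuming standard cloud-init paths:
# Check whether cloud-init ran and how far it got
cloud-init status --long
# Module output (anything runcmd printed lands here)
sudo cat /var/log/cloud-init-output.log
# The user-data that was actually delivered to the instance
sudo cat /var/lib/cloud/instance/user-data.txt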
I was using the following as a resource:
https://discourse.maas.io/t/customizing-maas-deployments-with-cloud-init/165
[Update]
I had installed MAAS using the snap. After reinstalling via the Debian packages, it works as expected. Thank you all for viewing.
I'm running Ubuntu 20.04 within WSL2 on Windows 10.
I installed Podman:
>podman -v
podman version 3.
I tried starting a container with
podman run --name some-redis -d -p 6379:6379 redis
The container starts, and there are no errors in the log.
If I run
redis-cli
from Ubuntu, it works. From cmd/PowerShell, it does not:
rdcli -h localhost
localhost:6379> (error) Redis connection to localhost:6379 failed - connect ECONNREFUSED 127.0.0.1:6379
It also does not work with my Spring Boot application.
I'm also running a Portainer container with port mapping 9000:9000, and I can access it from Ubuntu, cmd, and PowerShell.
So what's the problem with Redis? Is it coming from Redis or from WSL2/Podman? What can I do?
PS: The same container on the same machine was working fine with Docker Desktop.
You are probably running into this WSL2 issue: https://github.com/microsoft/WSL/issues/4851
Solution:
Option 1: use [::1]:6379 instead of localhost:6379 on the Windows side.
Option 2: use -p 127.0.0.1:6379:6379 instead of -p 6379:6379 with podman run.
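A minimal sketch of option 2, reusing the container name and ports from the question:
# Recreate the container with the port bound explicitly to the IPv4 loopback
podman rm -f some-redis
podman run --name some-redis -d -p 127.0.0.1:6379:6379 redis
# From the Windows side, the IPv4 loopback address should now connect
rdcli -h 127.0.0.1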
I'm sure this is not the first question about Bitbucket Pipelines and DigitalOcean, but I have gone through several similar posts without any luck.
pipelines:
  default:
    - step:
        name: SSH to Digital Ocean and update docker image
        script:
          - ssh -i ~/.ssh/config root@xxx.xxx.xxx.xxx
          - docker rm -f mycontainer
          - docker image rm -f myrepo/imagename:tag
          - docker pull myrepo/imagename:tag
          - docker run --name mycontainer -p 12345:80 -d=true --restart=always myrepo/imagename:tag
        services:
          - docker
Here is the SSH key in my Bitbucket repository, and here is what the Bitbucket Pipeline shows me (screenshots omitted):
How can I resolve this?
This is not a key problem: the Pipelines container does not act as a normal terminal, but ssh expects a terminal under normal operation. You should be able to pass the command(s) to run as arguments to the ssh command: ssh -i /path/to/key user@host "docker rm -f mycontainer && docker image rm -f myrepo/imagename:tag" and so on.
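A minimal sketch of the full non-interactive invocation, keeping the placeholder host and key path from the question:
# Run the whole remote sequence as a single ssh command; otherwise each
# script line executes locally inside the Pipelines container
ssh -i /path/to/key root@xxx.xxx.xxx.xxx \
  "docker rm -f mycontainer && \
   docker image rm -f myrepo/imagename:tag && \
   docker pull myrepo/imagename:tag && \
   docker run --name mycontainer -p 12345:80 -d=true --restart=always myrepo/imagename:tag"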
I have a GitLab installation, and I am trying to set up a gitlab-runner with the Docker executor. Everything is fine until tests start running; then, since my projects are private and have HTTP access disabled, they fail at clone time with:
Running with gitlab-runner 10.0.2 (a9a76a50)
on Jupiter-docker (5f4ed288)
Using Docker executor with image fedora:26 ...
Using docker image sha256:1f082f05a7fc20f99a4ccffc0484f45e6227984940f2c57d8617187b44fd5c46 for predefined container...
Pulling docker image fedora:26 ...
Using docker image fedora:26 ID=sha256:b0b140824a486ccc0f7968f3c6ceb6982b4b77e82ef8b4faaf2806049fc266df for build container...
Running on runner-5f4ed288-project-5-concurrent-0 via 2705e39bc3d7...
Cloning repository...
Cloning into '/builds/pmatos/tob'...
remote: Git access over HTTP is not allowed
fatal: unable to access 'https://gitlab.linki.tools/pmatos/tob.git': The requested URL returned error: 403
ERROR: Job failed: exit code 1
I have looked into https://docs.gitlab.com/ee/ci/ssh_keys/README.html
and decided to give it a try, so my .gitlab-ci.yml starts with:
image: fedora:26

before_script:
  # Install ssh-agent if not already installed; it is required by Docker.
  # (Change apt-get to yum if you use a CentOS-based image.)
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
  # Run ssh-agent (inside the build environment)
  - eval $(ssh-agent -s)
  # Add the SSH key stored in the SSH_PRIVATE_KEY variable to the agent store
  - ssh-add <(echo "$SSH_PRIVATE_KEY")
  # For Docker builds, disable host key checking. Be aware that by adding this
  # you are susceptible to man-in-the-middle attacks.
  # WARNING: use this only with the Docker executor; if you use it with shell
  # you will overwrite your user's SSH config.
  - mkdir -p ~/.ssh
  - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'

... JOBS ...
I set up SSH_PRIVATE_KEY correctly, but the issue is that the clone happens before before_script runs. I then tried to start the container with -v /home/pmatos/gitlab-runner_ssh:/root/.ssh, but the clone still uses HTTP. How can I force the container to clone over SSH?
Due to the way GitLab CI works, the runner requires HTTPS access to the repository, so if you enable CI, you need to leave HTTPS repo access enabled as well.
This is not a privacy issue, however: making the repository reachable over HTTPS doesn't stop GitLab from checking whether you're authorized to access it.
I then tried to start the container with -v /home/pmatos/gitlab-runner_ssh:/root/.ssh but still the cloning is trying to use HTTP
If at all possible, try adding within your container:
git config --global url."ssh://git@".insteadOf https://
(assuming the SSH user is git)
That would make any clone of any https:// URL use SSH instead.
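As a sketch, the rewrite can also be scoped to the instance from the error log above, then verified:
# Rewrite only URLs for this GitLab host
git config --global url."ssh://git@gitlab.linki.tools".insteadOf "https://gitlab.linki.tools"
# Confirm the rewrite is registered, then test that an https URL resolves over SSH
git config --get-regexp '^url\.'
git ls-remote https://gitlab.linki.tools/pmatos/tob.git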
RubyMine has options to add remote SDKs using Vagrant and SSH; however, I decided to go with Docker. I have already created a Ruby container, but I don't know how to enable SSH access to it so RubyMine can set it as the remote SDK.
Is it possible?
I tried to follow this article, but the Ruby image doesn't have yum, and the epel-release package is for Fedora/RedHat.
Hey, are you using the official Ruby Docker image?
If so, it's based on Debian, and you'll have to use apt-get to install packages.
Here's a handy snippet for installing openssh-server and configuring a user in a Dockerfile:
FROM ruby:2.1.9

#======================
# Install OpenSSH server (sshd)
#======================
# NOTE: ${RUN_DIR} is not defined in this image, so the PidFile path
# resolves to /sshd.pid unless you set it yourself.
RUN apt-get update -qqy \
  && apt-get -qqy install openssh-server \
  && echo "PidFile ${RUN_DIR}/sshd.pid" >> /etc/ssh/sshd_config \
  && sed -i 's|session required pam_loginuid.so|session optional pam_loginuid.so|g' /etc/pam.d/sshd \
  && mkdir -p /var/run/sshd \
  && rm -rf /var/lib/apt/lists/*

# Add user rubymine with password rubymine and give ownership of rubymine home dir
# (--disabled-password --gecos "" keeps the build non-interactive)
RUN adduser --quiet --disabled-password --gecos "" rubymine \
  && echo "rubymine:rubymine" | chpasswd \
  && chown -R rubymine:rubymine /home/rubymine

EXPOSE 22
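A minimal usage sketch, assuming the image is built from the Dockerfile above (the image tag and host port are made up here, and sshd is started explicitly since the Dockerfile defines no CMD):
# Build the image and run it with sshd in the foreground
docker build -t rubymine-ssh .
docker run -d --name myruby-ssh -p 2222:22 rubymine-ssh /usr/sbin/sshd -D
# Connect as the rubymine user (password: rubymine)
ssh -p 2222 rubymine@localhost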
I'm not sure exactly which configurations you can perform with RubyMine, but it's possible to open a TTY in the container without needing SSH:
# Run it as a daemon (-it keeps the default irb process alive)
docker run -dit --name=myruby ruby:2.1.9
# Connect to it
docker exec -it myruby /bin/bash
UPDATE:
Try setting the DOCKER_HOST environment variable so the client talks to the daemon over a TCP port:
export DOCKER_HOST='tcp://localhost:2376'
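A quick check, assuming the Docker daemon is actually listening on that port:
# With DOCKER_HOST exported as above, plain client commands go over TCP
docker version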
I am trying to install CoreOS on Hyper-V on Windows Server 2008 R2.
I set up the virtual machine, boot it from coreos.iso, then wget my cloud-config.yaml.
Then I run sudo coreos-install -d /dev/sda -c cloud-config.yaml and it says:
Checking availability of "local-file"
Fetching user-data from datasource of type "local-file"
And... that's all; it does nothing more.
Here's my cloud-config.yaml
#cloud-config
hostname: dockerhost
coreos:
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
users:
  - name: core
    ssh-authorized-keys:
      - ssh-rsa somesshkey
    groups:
      - sudo
      - docker
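As a sanity check, CoreOS ships a validator for cloud-config files; a sketch, run wherever the coreos-cloudinit binary is available (e.g. on the booted live image):
# Validate the cloud-config syntax before running coreos-install
coreos-cloudinit -validate --from-file cloud-config.yaml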
FYI, I'm using this tutorial.
Figured it out.
It was our proxy server, which I found out when I ran the installer under the excellent bash -x command, which gave me the full trace output.
The command was proposed by @BrianReadbeard.
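A sketch of that debugging step plus the usual proxy workaround (the proxy URL is hypothetical, and this assumes the installer's download tool honors the standard proxy variables):
# Trace every command the installer runs to see where it stalls
sudo bash -x "$(which coreos-install)" -d /dev/sda -c cloud-config.yaml
# If the trace shows downloads hanging, export the proxy and re-run,
# preserving the environment with sudo -E
export http_proxy=http://proxy.example.com:3128
export https_proxy=http://proxy.example.com:3128
sudo -E coreos-install -d /dev/sda -c cloud-config.yaml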