I am trying to install CoreOS on Hyper-V on Windows Server 2008 R2.
I set up a virtual machine, boot it from coreos.iso, then wget my cloud-config.yaml.
Then I run sudo coreos-install -d /dev/sda -c cloud-config.yaml and it says:
Checking availability of "local-file"
Fetching user-data from datasource of type "local-file"
And... that's all; it goes no further.
Here's my cloud-config.yaml:
#cloud-config
hostname: dockerhost
coreos:
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
users:
  - name: core
    ssh-authorized-keys:
      - ssh-rsa somesshkey
    groups:
      - sudo
      - docker
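(Aside: before installing, the file can be syntax-checked on the CoreOS live system; a quick sketch, assuming coreos-cloudinit is on the ISO as usual and its -validate flag is available in this release:)
# Sanity-check the cloud-config before handing it to coreos-install
coreos-cloudinit -validate --from-file=cloud-config.yaml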
FYI, I'm using this tutorial.
Figured it out
It was our proxy server, which I found out when I ran the command under bash -x, which gave me the full output.
The command was proposed by @BrianReadbeard.
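For reference, a minimal sketch of that debugging step (the proxy address below is a placeholder, not our real one):
# Trace the installer script to see exactly which command stalls
sudo bash -x "$(which coreos-install)" -d /dev/sda -c cloud-config.yaml

# With the proxy identified, exporting it lets the installer's downloads through
# (proxy URL is hypothetical)
export http_proxy=http://proxy.example.com:3128
export https_proxy=http://proxy.example.com:3128
sudo -E coreos-install -d /dev/sda -c cloud-config.yaml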
I am attempting to deploy a test machine in my lab through the MAAS CLI. The machine goes through deployment; however, the cloud-init user-data does not fire on boot.
[user-data]
#cloud-config
user: terminal
password:
chpasswd: {expire: False}
ssh_pwauth: True
package_update: true
package_upgrade: true
runcmd:
  - 'curl -L https://bootstrap.saltstack.com -o install_salt.sh'
  - 'sh install_salt.sh -A 192.168.1.155'
  - 'apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 78BD65473CB3BD13'
[CLI CMDS]
user_data=$(base64 -w0 /home/aweare/cloud-init/user-data)
maas aweare machine deploy mktpfp user_data=$user_data
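A quick sanity check that may help here (a sketch reusing the same path as above): decode the payload locally and confirm the first line is the #cloud-config header, since cloud-init silently ignores user-data without it.
user_data=$(base64 -w0 /home/aweare/cloud-init/user-data)
# Round-trip the encoding; the first line printed must be '#cloud-config'
echo "$user_data" | base64 -d | head -n 1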
I was using the following as a resource:
https://discourse.maas.io/t/customizing-maas-deployments-with-cloud-init/165
[Update]
I had installed MAAS using Snap. After installing via packages, it works as expected. Thank you all for viewing.
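For anyone debugging the same symptom, cloud-init leaves enough traces on the deployed node to tell whether the user-data ever arrived (standard cloud-init tooling; log paths may vary by release):
# Did cloud-init run, and did it finish cleanly?
cloud-init status --long

# The received user-data and any runcmd failures are logged here
sudo grep -i userdata /var/log/cloud-init.log
sudo tail -n 50 /var/log/cloud-init-output.log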
I'm running Ubuntu 20.04 within WSL2 on Windows 10.
I installed podman.
>podman -v
podman version 3.
I tried starting a container with
podman run --name some-redis -d -p 6379:6379 redis
The container starts, with no errors in the log.
If I try
redis-cli
from Ubuntu, it works.
From DOS/PowerShell it does not:
rdcli -h localhost
localhost:6379> (error) Redis connection to localhost:6379 failed - connect ECONNREFUSED 127.0.0.1:6379
It is also not working with my Spring Boot application.
I'm also using a Portainer container with port mapping 9000:9000, and I can access it from Ubuntu, DOS, and PowerShell.
So what's the problem with Redis? Is it coming from Redis or from WSL2/podman?
What can I do?
PS: The same container on the same machine with Docker Desktop was working fine.
You are probably running into this WSL2 issue: https://github.com/microsoft/WSL/issues/4851
Solution:
Option 1: use [::1]:6379 instead of localhost:6379 from the Windows side.
Option 2: use -p 127.0.0.1:6379:6379 instead of -p 6379:6379 with podman run.
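A sketch of option 2 with the container from the question:
# Recreate the container, publishing explicitly on the IPv4 loopback address
podman rm -f some-redis
podman run --name some-redis -d -p 127.0.0.1:6379:6379 redis

# From Windows, this should now connect (option 1 would instead be: rdcli -h ::1)
rdcli -h 127.0.0.1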
I am trying to write a YAML pipeline script to deploy files that have been altered in my Bitbucket repository to my remote server using SSH keys. The document I have in place at the moment was copied from Bitbucket itself and has errors:
pipelines:
  default:
    - step:
        name: Deploy to test
        deployment: test
        script:
          - pipe: atlassian/sftp-deploy:0.3.1
          - variables:
              USER: $USER
              SERVER: $SERVER
              REMOTE_PATH: $REMOTE_PATH
              LOCAL_PATH: $LOCAL_PATH
I am getting the following error:
Configuration error
There is an error in your bitbucket-pipelines.yml at [pipelines > default > 0 > step > script > 1]. To be precise: Missing or empty command string. Each item in this list should either be a single command string or a map defining a pipe invocation.
My SSH public and private keys are set up in Bitbucket, along with the fingerprint and host. The variables have also been set up.
How do I go about setting up my YAML deploy script to connect to my remote server via ssh and transfer the files?
Try updating the variables section to become:
- variables:
    - USER: $USER
    - SERVER: $SERVER
    - REMOTE_PATH: $REMOTE_PATH
    - LOCAL_PATH: $LOCAL_PATH
Here is an example of how to set variables: https://confluence.atlassian.com/bitbucket/configure-bitbucket-pipelines-yml-792298910.html#Configurebitbucket-pipelines.yml-ci_variablesvariables
Your - step directive has to be indented.
I have a bitbucket-pipelines.yml like this (using rsync instead of ssh):
# This is a sample build configuration for PHP.
# Check our guides at https://confluence.atlassian.com/x/e8YWN for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: php:7.2.1-fpm
pipelines:
  default:
    - step:
        script:
          - apt-get update
          - apt-get install zip -y
          - apt-get install unzip -y
          - apt-get install libgmp3-dev -y
          - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
          - composer install
          - cp .env.example .env
          #- vendor/bin/phpunit
          - pipe: atlassian/rsync-deploy:0.2.0
            variables:
              USER: $DEPLOY_USER
              SERVER: $DEPLOY_SERVER
              REMOTE_PATH: $DEPLOY_PATH
              LOCAL_PATH: '.'
I suggest using their online editor in the repository for editing bitbucket-pipelines.yml; it checks the formal YAML structure, and you can't commit an invalid file.
Even if you check the file in some other YAML editor, it may look fine, but not necessarily according to the Bitbucket specification. Their online editor does a fine job.
Also, I suggest visiting the Atlassian community forum, as it's very active and staff members sometimes provide answers.
However, I struggle with the many dependencies needed to run tests properly (the actual bitbucket-pipelines.yml is getting bigger and bigger).
Maybe there is a nicely prepared Docker image for this job; a sketch of baking one yourself is below.
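One way to get there (image name and tag are hypothetical): put the apt-get and Composer installer steps from the pipeline into a Dockerfile, publish the result, and point the pipeline's image: directive at it.
# Build an image whose Dockerfile runs the apt-get/Composer steps above,
# then push it so Bitbucket Pipelines can pull it as the build environment
docker build -t your-dockerhub-user/php-build:7.2 .
docker push your-dockerhub-user/php-build:7.2
# bitbucket-pipelines.yml then shrinks to:
#   image: your-dockerhub-user/php-build:7.2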
I have run the Aerospike server inside a Docker container using the command below.
$ docker run -d -p 3000:3000 -p 3001:3001 -p 3002:3002 -p 3003:3003 -p 8081:8081 --name aerospike aerospike/aerospike-server
89b29f48c6bce29045ea0d9b033cd152956af6d7d76a9f8ec650067350cbc906
It is running successfully. I verified it using the command below.
$ docker ps
CONTAINER ID   IMAGE                        COMMAND                CREATED              STATUS              PORTS                                                      NAMES
89b29f48c6bc   aerospike/aerospike-server   "/entrypoint.sh asd"   About a minute ago   Up About a minute   0.0.0.0:3000-3003->3000-3003/tcp, 0.0.0.0:8081->8081/tcp   aerospike
I'm able to successfully connect to it with aql.
$ aql
Aerospike Query Client
Version 3.13.0.1
C Client Version 4.1.6
Copyright 2012-2016 Aerospike. All rights reserved.
aql>
But when I launch AMC for the Aerospike server in Docker, it hangs and does not display any data. I've attached the screenshot.
Did I miss any configuration? Why is it not loading any data?
You can try the following:
version: "3.9"
services:
aerospike:
image: "aerospike:ce-6.0.0.1"
environment:
NAMESPACE: testns
ports:
- "3000:3000"
- "3001:3001"
- "3002:3002"
amc:
image: "aerospike/amc"
links:
- "aerospike:aerospike"
ports:
- "8081:8081"
Then go to http://localhost:8081 and enter "aerospike:3000" in the connect window.
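Assuming the file above is saved as docker-compose.yml, starting the pair looks like this; AMC reaches the server by the service name "aerospike" on the compose network:
# Start both containers in the background
docker compose up -d
# Watch AMC come up before opening http://localhost:8081
docker compose logs -f amc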
I need to know how to make Couchbase 3.0.x work in a Travis CI context.
I have been able to start Couchbase 2.0 in a Travis context, but 3.0.x does not seem to actually start: the service daemon on Linux says it is started, but netstat does not find the web console on port 8091. The bucket interface is running on 8092, yet the Java SDK cannot use it.
Here is the script I have tried using in my .travis.yml:
before_install:
  - sudo wget -O/etc/apt/sources.list.d/couchbase.list http://packages.couchbase.com/ubuntu/couchbase-ubuntu1204.list
  - sudo wget http://packages.couchbase.com/ubuntu/couchbase.key && sudo cat couchbase.key | sudo apt-key add -
  - sudo apt-get update
  - sudo apt-get install libcouchbase2 libcouchbase-dev
  - sudo wget http://packages.couchbase.com/releases/3.0.2/couchbase-server-enterprise_3.0.2-ubuntu12.04_amd64.deb
  - sudo dpkg -i couchbase-server-enterprise_3.0.2-ubuntu12.04_amd64.deb
  - sudo service couchbase-server restart
  - /opt/couchbase/bin/couchbase-cli cluster-init -c 127.0.0.1:8091 --cluster-username=Administrator --cluster-password=password --cluster-ramsize=512
PS: I know Travis instances only have 3 GB, but the Couchbase docs do mention that it can run on 1 GB... I have not been able to find instructions on how to achieve that.
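Not a verified fix, but one sketch worth trying: the init service can return before the REST port is actually listening, so polling port 8091 (the /pools REST endpoint) before running cluster-init may get further than the script above.
# Wait for the web console / REST API before initializing the cluster
for i in $(seq 1 30); do
  curl -sf http://127.0.0.1:8091/pools > /dev/null && break
  sleep 2
done
/opt/couchbase/bin/couchbase-cli cluster-init -c 127.0.0.1:8091 \
  --cluster-username=Administrator --cluster-password=password --cluster-ramsize=512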