When testing with plasmoidviewer, if you edit your SVG the image will not update the next time you run it, because the rendered SVG elements are cached.
PlasmaCore.FrameSvgItem {
    imagePath: plasmoid.file("", "images/dragarea.svg")
}
You need to delete the cache like so before running plasmoidviewer.
rm ~/.cache/plasma-svgelements-*
plasmoidviewer -a package -l topedge -f horizontal -x 0 -y 0
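If you iterate on the SVG a lot, a small wrapper saves retyping this; a sketch assuming the default cache location and the same viewer flags as above:
#!/bin/sh
# clear Plasma's cached SVG elements, then launch the applet fresh
rm -f ~/.cache/plasma-svgelements-*
exec plasmoidviewer -a package -l topedge -f horizontal -x 0 -y 0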
Related
I found a similar case here. I am using Molecule to test my Ansible roles, but for some reason it is skipping the "create" part and giving an error like:
fatal: [rabbitmq]: UNREACHABLE! => {"changed": false, "msg": "Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \"` echo ~/.ansible/tmp `\"&& mkdir \"` echo ~/.ansible/tmp/ansible-tmp-1638541586.6239848-828-250053975102429 `\" && echo ansible-tmp-1638541586.6239848-828-250053975102429=\"` echo ~/.ansible/tmp/ansible-tmp-1638541586.6239848-828-250053975102429 `\" ), exited with result 1", "unreachable": true}
It is skipping the create process with "Skipping, instances already created". However, nothing is running:
name@EEW00438:~/.cache$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
name@EEW00438:~/.cache$
What I tried:
molecule destroy
molecule reset
restart
rm -rf ~/.cache/
changed remote_tmp to /tmp/.ansible/ in /etc/ansible/ansible.cfg
reinstall molecule
This issue is only with one role.
UPDATE:
It is failing on this step:
mkdir \"` echo ~/.ansible/tmp/ansible-tmp-1638782939.31706-2913-12516475286623 `\" && echo ansible-tmp-1638782939.31706-2913-12516475286623=
mkdir: cannot create directory ‘"/home/user/.ansible/tmp/ansible-tmp-1638782939.31706-2913-12516475286623"’: No such file or directory
I stumbled upon this issue as well.
When you create the role, you need to create it as molecule init role --driver-name docker ns.myrole to enable Docker. Be sure to install the Docker driver too if you haven't: pip install --upgrade molecule-docker
So if you need to tweak the container that runs, edit molecule.yml. It defaults to CentOS. I switched to Ubuntu in there, and created a Dockerfile to provision the container with things that need to exist.
molecule.yml
---
dependency:
  name: galaxy
driver:
  name: docker
platforms:
  - name: instance
    image: ubuntu:22.04 # this is required but ignored since I specify a `dockerfile`
    pre_build_image: false
    dockerfile: Dockerfile
provisioner:
  name: ansible
verifier:
  name: ansible
For example, Ubuntu 22.04 doesn't ship a python binary anymore, so I added an alias at the end of what Molecule renders so that Ansible can use python and have it redirect to python3:
FROM ubuntu:22.04
RUN if [ $(command -v apt-get) ]; then export DEBIAN_FRONTEND=noninteractive && apt-get update && apt-get install -y python3 sudo bash ca-certificates iproute2 python3-apt aptitude && apt-get clean && rm -rf /var/lib/apt/lists/*; \
elif [ $(command -v dnf) ]; then dnf makecache && dnf --assumeyes install /usr/bin/python3 /usr/bin/python3-config /usr/bin/dnf-3 sudo bash iproute && dnf clean all; \
elif [ $(command -v yum) ]; then yum makecache fast && yum install -y /usr/bin/python /usr/bin/python2-config sudo yum-plugin-ovl bash iproute && sed -i 's/plugins=0/plugins=1/g' /etc/yum.conf && yum clean all; \
elif [ $(command -v zypper) ]; then zypper refresh && zypper install -y python3 sudo bash iproute2 && zypper clean -a; \
elif [ $(command -v apk) ]; then apk update && apk add --no-cache python3 sudo bash ca-certificates; \
elif [ $(command -v xbps-install) ]; then xbps-install -Syu && xbps-install -y python3 sudo bash ca-certificates iproute2 && xbps-remove -O; fi
RUN echo 'alias python=python3' >> ~/.bashrc
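To sanity-check the rendered container before running a full scenario, something like this helps (assuming the default scenario, and that the alias landed in root's .bashrc):
molecule create    # builds the Dockerfile and starts the instance
molecule login     # open an interactive shell inside it
python --version   # should report Python 3.x thanks to the alias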
It's been years since I last used Molecule, and I must say... it's gone downhill. It used to be easy/clear/direct to get things working. Sigh. I guess I should stick to containers and force the migration off VMs sooner!
The problem may be caused by the Docker context switch performed when Docker Desktop starts. Molecule does create a container, but in a context that is no longer active.
At startup, Docker Desktop automatically switches the context from default to desktop-linux [1]. The active context determines which containers are available from CLI.
The context cannot be set in Molecule, i.e. the default context is always used to create containers [2].
$ molecule create --scenario-name test
... # The output with the error is skipped because it duplicates the output from the question
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$ docker context ls
NAME              TYPE   DESCRIPTION                                DOCKER ENDPOINT                                    KUBERNETES ENDPOINT   ORCHESTRATOR
default           moby   Current DOCKER_HOST based configuration   unix:///var/run/docker.sock                                              swarm
desktop-linux *   moby                                              unix:///home/bkarpov/.docker/desktop/docker.sock
$ docker context use default
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a71bfd28992f geerlingguy/docker-ubuntu2004-ansible "bash -c 'while true…" 5 minutes ago Up 5 minutes some-instance
$ molecule login --scenario-name test
INFO Running test > login
root@some-instance:/#
Solutions
Switch the context back to default manually
docker context use default
This solution is suitable for one-time use, since the context will need to be switched back every time Docker Desktop is started. The Docker Desktop service will continue to work using the desktop-linux context.
Issue requesting context switching support in Docker Desktop: https://github.com/docker/roadmap/issues/47
Stop Docker Desktop
systemctl --user stop docker-desktop
Stopping the Docker Desktop service will automatically switch to the default context.
Set DOCKER_CONTEXT so that Docker Desktop does not change the context in the current shell
export DOCKER_CONTEXT=default
systemctl --user restart docker-desktop
When stopping, the context returns to default, and when starting, it does not switch to desktop-linux.
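To make this survive new shells, the variable can also live in your shell profile; a sketch, assuming zsh or bash:
# ~/.zshrc or ~/.bashrc: keep the Docker CLI pinned to the default context
export DOCKER_CONTEXT=default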
References
[1] https://docs.docker.com/desktop/install/ubuntu/#launch-docker-desktop
[2] https://github.com/ansible-community/molecule-docker#faq
Node.js is installed on the system through nvm, but the npm command is not found. The console is Oh My Zsh.
You can use zsh-nvm or enable it yourself by adding the following lines to your ~/.zshrc:
export NVM_DIR=~/.nvm
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"
Extra:
For faster shell initialization, I use lazynvm, which only loads nvm when node is actually needed:
lazynvm() {
  unset -f nvm node npm
  export NVM_DIR=~/.nvm
  [ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh" # This loads nvm
}

nvm() {
  lazynvm
  nvm $@
}

node() {
  lazynvm
  node $@
}

npm() {
  lazynvm
  npm $@
}
Reference: Lazy load nvm for faster shell start
Switching from Bash to Oh-My-Zsh
If you already have nvm installed and you're switching from bash to oh-my-zsh you can simply open up your .zshrc file and add the nvm plugin that is included with oh-my-zsh:
Open your zsh config file .zshrc in nano with this command: nano ~/.zshrc
Scroll down to where it shows plugins=(git) and add nvm inside the parentheses to make it show as plugins=(git nvm) (separate plugins with spaces)
Press control + O (on macOS), then enter, to save, then press control + X to exit
Then open a new terminal window/tab and enter nvm ls to confirm it works. Note that you must open a new window/tab for your shell to use the newly updated .zshrc config (or enter source ~/.zshrc, etc.)
Source: https://github.com/robbyrussell/oh-my-zsh/tree/master/plugins/nvm
This worked for me on Ubuntu 20.04.
Install or update nvm
wget -qO- https://raw.githubusercontent.com/nvm-sh/nvm/v0.37.2/install.sh | bash
Add in your ~/.zshrc
echo 'export NVM_DIR=~/.nvm' >> ~/.zshrc
echo '[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"' >> ~/.zshrc
Load in the current shell environment
source ~/.zshrc
Check the nvm version
nvm -v
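To also confirm that npm now resolves through nvm rather than being missing, check where it comes from (the version path below is just an example):
command -v npm
# e.g. /home/user/.nvm/versions/node/v14.18.1/bin/npm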
use homebrew to install nvm
brew install nvm
edit your shell configuration
vim ~/.zshrc # or vim ~/.bashrc
export NVM_DIR=~/.nvm
[ -s "$(brew --prefix nvm)/nvm.sh" ] && . "$(brew --prefix nvm)/nvm.sh" # load nvm from the Homebrew prefix
esc > :wq
save file
reload the configuration
source $(brew --prefix nvm)/nvm.sh
view nvm version
$ nvm --version
# 0.36.0
enjoy it.
A much easier solution is to use the nvm plugin that is shipped with oh-my-zsh by default. It also automatically sources nvm, so you don't need to do it manually in your .zshrc:
git clone https://github.com/nvm-sh/nvm.git ~/.nvm
cd ~/.nvm && git checkout v0.35.1 (the latest release at the time of writing)
Add nvm to your ~/.zshrc plugins. Ex: plugins=(... nvm)
I discovered that there is an nvm plugin shipped with oh-my-zsh (different from lukechilds' plugin). After a short inspection, I think it makes the necessary modifications when loading, so simply adding nvm to the plugins list in .zshrc should work as well (and it does for me).
I did not find any more details on that default nvm plugin via Google, so I don't know whether this is the "go-to" solution.
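In other words, the only change needed is the plugin list; a minimal excerpt, assuming an otherwise default ~/.zshrc:
# ~/.zshrc: enable the bundled nvm plugin alongside the defaults
plugins=(git nvm)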
Add this code to the .zshrc in your user directory:
export NVM_DIR="$HOME/.nvm"
[ -s "/usr/local/opt/nvm/nvm.sh" ] && . "/usr/local/opt/nvm/nvm.sh" # This loads nvm
[ -s "/usr/local/opt/nvm/etc/bash_completion.d/nvm" ] && . "/usr/local/opt/nvm/etc/bash_completion.d/nvm" # This loads nvm bash_completion
Then run this in your terminal:
source ~/.zshrc
With Linux (Ubuntu 20.04, 22.04 and 22.10)
With your favorite editor, edit ~/.zshrc:
nano or vi ~/.zshrc
At the end of the file, add:
# NVM
export NVM_DIR=~/.nvm
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"
And then run:
source ~/.zshrc
I strongly suggest using christophemarois' approach to lazy-loading nvm (node, npm and global packages) in order to avoid slow shell startup times:
# Add every binary that requires nvm, npm or node to run to an array of node globals
NODE_GLOBALS=(`find ~/.nvm/versions/node -maxdepth 3 -type l -wholename '*/bin/*' | xargs -n1 basename | sort | uniq`)
NODE_GLOBALS+=("node")
NODE_GLOBALS+=("nvm")

# Lazy-load nvm + npm on the first call to any node global
load_nvm () {
  export NVM_DIR=~/.nvm
  [ -s "$(brew --prefix nvm)/nvm.sh" ] && . "$(brew --prefix nvm)/nvm.sh"
}

# Make each node global trigger the lazy loading
for cmd in "${NODE_GLOBALS[@]}"; do
  eval "${cmd}(){ unset -f ${NODE_GLOBALS[*]}; load_nvm; ${cmd} \$@; }"
done
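For clarity, the eval generates one stub function per entry; with only the three base names in NODE_GLOBALS, the stub for npm would expand to roughly:
# the generated stub: remove all stubs, load nvm, then run the real command
npm() { unset -f node nvm npm; load_nvm; npm $@; }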
Docker started to produce weird bugs when I was using a few simple Alpine-based containers. Two of these problems are:
rc-update was not found when I was trying to use it
after installing the openssh package, there was nothing in /etc/ssh and there was no /etc/init.d/sshd to start/restart the service
To avoid confusion I checked out a widely used container that serves as a simple SSH server. You can do it by executing:
git clone https://github.com/chamunks/alpine-openssh.git
After this, go into the alpine-openssh directory and build the container with:
docker build -t alpine-openssh .
Mine produces the following:
Sending build context to Docker daemon 125.4 kB
Step 1 : FROM alpine
---> 4e38e38c8ce0
Step 2 : MAINTAINER Chamunks <Chamunks@gmail.com>
---> Running in c21d3fa28903
---> f32322a2871a
Removing intermediate container c21d3fa28903
Step 3 : COPY sshd_config /etc/ssh/sshd_config
---> 392364fc35ce
Removing intermediate container 4176ae093cb8
Step 4 : ADD https://gist.githubusercontent.com/chamunks/38c807435ffed53583f0/raw/ec868d1b45e248eb517a134b84474133c3e7dc66/gistfile1.txt /data/.ssh/authorized_keys
Downloading [==================================================>] 864 B/864 B
---> c3899b675728
Removing intermediate container f83629b6fa9b
Step 5 : RUN apk add --update openssh && rc-update add sshd && rc-status && touch /run/openrc/softlevel && /etc/init.d/sshd start && /etc/init.d/sshd stop && adduser -D user -h /data/
---> Running in 1d1aad9d1678
fetch http://dl-cdn.alpinelinux.org/alpine/v3.4/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.4/community/x86_64/APKINDEX.tar.gz
(1/3) Installing openssh-client (7.2_p2-r3)
(2/3) Installing openssh-sftp-server (7.2_p2-r3)
(3/3) Installing openssh (7.2_p2-r3)
Executing busybox-1.24.2-r9.trigger
OK: 8 MiB in 14 packages
/bin/sh: rc-update: not found
The command '/bin/sh -c apk add --update openssh && rc-update add sshd && rc-status && touch /run/openrc/softlevel && /etc/init.d/sshd start && /etc/init.d/sshd stop && adduser -D user -h /data/' returned a non-zero code: 127
Notice the /bin/sh: rc-update: not found part. This should work, but it doesn't. I restarted my Docker service and checked Docker's forums, but no similar issue has been reported so far.
Any ideas why this happens?
The rc-update tool is part of the openrc package, which is not included in the Alpine base image. Install it alongside openssh:
apk add openrc
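Applied to the Dockerfile from the question, that means adding openrc to the apk step; a minimal sketch:
FROM alpine
# openrc provides rc-update, rc-status and the /etc/init.d machinery
RUN apk add --update openssh openrc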
I have a Docker image where I have put Apache. I want it so that when the container starts, Apache starts and I can visit the test page. However, the page does not appear when I try.
This is my current dockerfile:
FROM centos:7
MAINTAINER me <me@me.com>
RUN yum update -y && yum install -y httpd php
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
EXPOSE 80
EXPOSE 443
CMD ["/usr/sbin/init"]
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
I am running the container with the command docker run -d -P <container_name>, and when I do docker ps, I see the ports section being populated correctly, with 0.0.0.0:32784->80/tcp, 0.0.0.0:32783->443/tcp as the output.
The URL I'm trying to use to access it is 172.17.0.2:32784.
Any ideas?
Turns out the issue was that I was trying to connect with the Docker container's IP, when I should have been connecting with the IP of the server it was hosted on.
Derp.
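In other words, given the mapping from docker ps above, the test page is reached through the host's address and the published port (the host address below is a placeholder):
# port 32784 on the Docker host forwards to port 80 in the container
curl http://<docker-host-ip>:32784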
When deploying my Rails 3 app with Capistrano, it gets to the step where Capistrano executes this command:
* executing "if [ -d /var/www/appname/shared/cached-copy ]; then cd /var/www/dflabs1/shared/cached-copy && git fetch -q origin && git fetch --tags -q origi
n && git reset -q --hard d0a1373a3634935de1a75f377698ba53574fe580 && git clean -q -d -x -f; else git clone -q https://github.com/username/dflabs1.git /va
r/www/appname/shared/cached-copy && cd /var/www/appname/shared/cached-copy && git checkout -q -b deploy d0a1373a3634935de1a75f377698ba53574fe580; fi"
servers: ["11.10.1.162"]
Password:
[11.10.1.162] executing command
** [11.10.1.162 :: out] Username for 'https://github.com':
The problem is that when it outputs "Username for 'https://github.com':", the cursor jumps to a new line without letting me enter the username. If I try to enter the username on the new line, the deploy just does nothing. This is happening from an Ubuntu 12.04 desktop to an Ubuntu 12.04 server.
I tried adding the 'set :scm_username' option to deploy.rb but that had no effect. I tried in the Ubuntu terminal and in the Terminal view inside Aptana.
Try importing the SSH key of your deployment server into GitHub.
You can do this here: SSH Keys.
You can find the GitHub tutorial for SSH keys here: Generating SSH Keys
[EDIT]
Actually I just did it myself and checked the output.
Are you sure you set your set :repository, "git@github.com:[GitHub-User]/[GitHub-Repository-Name].git" in the deploy.rb correctly?
Because your
else git clone -q https://github.com/username/dflabs1.git
should instead look like
else git clone -q git@github.com:[GitHub-User]/[GitHub-Repository-Name].git
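A minimal sketch of the relevant deploy.rb setting, with the GitHub user and repository names left as placeholders:
# deploy.rb: use the SSH form of the repository URL so the deploy
# authenticates with the server's SSH key instead of prompting for
# HTTPS credentials
set :scm, :git
set :repository, "git@github.com:[GitHub-User]/[GitHub-Repository-Name].git"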