Migrating LXC to LXD

I've looked all over, but can't see if there is a way. I have a couple of LXC containers running Ubuntu 14.04 on top of an Ubuntu 14.04 host. They've become pretty important to me, so I want to be able to easily back up / migrate the LXC containers to another server if the host's hardware should fail.
I've built a new Ubuntu 15.10 server now with LXD, and have logged out and back in and can see the new group. For testing, I tarred one of my existing LXC containers up with the --numeric-owner switch on my 14.04 host:
tar --numeric-owner -czvf ContToBeMoved.tgz /var/lib/lxc/my_container
---then on new server ---
tar --numeric-owner -xzvf ContToBeMoved.tgz -C /var/lib/lxc/
...and have successfully restored the LXC container on the new 15.10 server.
When I run the LXD commands, though, LXD doesn't see the container. I tried moving the container to the /var/lib/lxd/containers directory and LXD still doesn't see it. Is there a way to edit / clone / migrate the LXC container so that we can use LXD moving forward?
Thanks in advance.

LXD uses an SQLite database for container configuration, so just dumping the container's rootfs in place won't be quite enough.
The easiest way to do what you want is to create a new container with LXD, then remove its rootfs from /var/lib/lxd/containers/NAME/rootfs and substitute the one from your original host.
Note that LXD runs unprivileged containers by default. If your source container was privileged (/var/lib/lxc/NAME/rootfs is owned by root:root instead of 100000:100000), then you'll want to run the following too:
lxc config set NAME security.privileged true
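Put together, the whole swap might look something like the sketch below. The ubuntu:14.04 image alias, container name, and temporary paths are assumptions; adjust them to your setup:
# create a placeholder container so LXD registers it in its database
lxc init ubuntu:14.04 my_container
# unpack the backup somewhere temporary, then swap the rootfs in
sudo mkdir -p /tmp/restore
sudo tar --numeric-owner -xzf ContToBeMoved.tgz -C /tmp/restore
sudo rm -rf /var/lib/lxd/containers/my_container/rootfs
sudo mv /tmp/restore/var/lib/lxc/my_container/rootfs /var/lib/lxd/containers/my_container/rootfs
# only needed if the source container was privileged
lxc config set my_container security.privileged true
lxc start my_container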

Related

How to get IP Address of Docker Desktop VM?

I'm in a team where some of us use Docker Toolbox and some use Docker Desktop. We're writing an application that needs to communicate with a docker container in development.
On Docker Toolbox, I know the docker-machine env command sets the docker host environment variable, and I can use that to get the IP of the virtual machine that's running the docker engine. From there I just access the exposed ports.
What's the equivalent way to get that information on Docker Desktop? (I don't have a machine that has Docker Desktop, only Docker Toolbox, but I'm writing code that should be able to access the docker container on both.)
On Windows, after installing Docker, there is an entry added by Docker inside your hosts file (C:\Windows\System32\drivers\etc\hosts), which states the IP as:
# Added by Docker Desktop
10.xx.xx.xx host.docker.internal
The section below got added to my /etc/hosts:
# Added by Docker Desktop
192.168.99.1 host.docker.internal
192.168.99.1 gateway.docker.internal
Then I was able to access it by adding the port to which the app was bound.
This command should display the IP:
ping -q -c 1 docker.local | sed -En "s/^.*\((.+)\).*$/\1/p"
ipconfig can get you this information as well
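If the code has to work under both Toolbox and Desktop, one rough approach (a sketch, not taken from the answers above; the "default" machine name and the fallback hostname are assumptions) is to try docker-machine first and fall back to Docker Desktop's special hostname:
# prefer docker-machine (Toolbox); fall back to Docker Desktop's host alias
if command -v docker-machine >/dev/null 2>&1; then
  DOCKER_HOST_IP=$(docker-machine ip default)
else
  DOCKER_HOST_IP=host.docker.internal
fi
echo "Docker host reachable at: $DOCKER_HOST_IP"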

Is it possible to configure a virtualhost in a docker container with Apache?

I describe my doubt below:
I currently have Docker installed on my Windows computer. I have an Ubuntu 18.04 container, which has PHP 7.2, Apache2, and MariaDB installed. The port mapping is as follows:
docker run -it --name my_container -p 8080:80 -p 8081:3306 ubuntu:18.04
Previously, before using Docker, I had configured a virtual host on my computer for a web project, something like http://my_project.dev, to access it instead of the typical http://localhost/projects/my_project.
Now that I have changed my way of working to Docker, my project works perfectly on port 8080, something like http://localhost:8080/projects/my_project, but I can't find a way to create a virtual host to access my project as http://my_project.dev in my current Docker container.
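No answer is included here, but one common approach would look roughly like the sketch below: define the VirtualHost inside the container and point the hostname at 127.0.0.1 on the Windows host. The ServerName, document root, and file names are assumptions:
# inside the container: create and enable a VirtualHost
cat > /etc/apache2/sites-available/my_project.conf <<'EOF'
<VirtualHost *:80>
    ServerName my_project.dev
    DocumentRoot /var/www/html/projects/my_project
</VirtualHost>
EOF
a2ensite my_project.conf && service apache2 reload
# on the Windows host, add "127.0.0.1 my_project.dev" to
# C:\Windows\System32\drivers\etc\hosts, then browse to
# http://my_project.dev:8080 (the 8080:80 mapping still applies)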

How to connect with SSH or SFTP to a local DDEV Container?

I installed:
Docker https://docs.docker.com/docker-for-mac/install/
Homebrew https://brew.sh/
DDEV https://ddev.readthedocs.io/en/latest/#installation
Composer https://getcomposer.org/download/
I installed TYPO3 with:
composer create-project typo3/cms-base-distribution ddevtypo3 ^8
I configured DDEV with:
cd ddevtypo3
ddev config
and hit Enter three times to accept the default values for project name, docroot, and project type.
Now (nearly finished) I started DDEV with:
ddev start
Everything works fine: I get the 'Thank you for downloading TYPO3' install window, and my local DDEV server ddevtypo3.ddev.local works.
Now I want to connect with my Coda2 to the container. If I type ddev ssh in the terminal, I get into the DDEV container, but how can I configure Coda2 to use SFTP or SSH to connect to DDEV?
Can somebody give me the right hint?
Perhaps I have to configure SSH or SFTP for DDEV.
Edit:
I wanted to use the SFTP connection just for editing files in the container, and SSH to connect the Coda terminal to the container.
I don't have to connect to the local container with SSH, because I can edit the files directly on the local filesystem:
In Coda2
In the file-browser tab I can browse locally (the left window).
And in the site window I can click the globe icon at the bottom left, which also shows the local filesystem.
I can also commit and push to my GitLab from the terminal, so I don't need the Coda2 SSH connection to my container for publishing my work to Git either.
In the shell tab, click the Connection drop-down and select localhost.
Or simply use the Terminal of the MacBook.
I can also use ddev ssh to connect to the container (either way).
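For one-off commands inside the container, the ddev CLI also has (to the best of my knowledge) an exec subcommand alongside ssh; a small sketch, assuming the standard DDEV web container:
ddev ssh                      # interactive shell in the web container
ddev exec composer install    # run a single command inside the container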
I realize now that this was not a good question, but I'm not deleting it, so others can learn from this thinking error and see how to get everything done without going the way I tried.
And for new DDEV users, like me... ;)

Not able to open the deck UI for spinnaker

I installed spinnaker using the command
bash <(curl --silent https://spinnaker.bintray.com/scripts/InstallSpinnaker.sh)
on a local ubuntu machine.
After installation, I am not able to connect to the Deck UI of Spinnaker using the URL http://localhost:9000.
Check the logs in /var/log/apache2 for errors, and /etc/apache2/ports.conf to see if it is listening on 127.0.0.1:9000.
The install script should have made those changes for you, but maybe you had a permissions issue or some other kind of local system policy preventing the installation from working properly.
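A quick way to run those checks (a sketch using the paths from the answer; log file names may differ on your system):
sudo tail -n 50 /var/log/apache2/error.log   # recent Apache errors
grep -i listen /etc/apache2/ports.conf       # expect something like "Listen 127.0.0.1:9000"
sudo apachectl -S                            # show which addresses/ports Apache is actually serving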

How to make changes to httpd.conf of Apache running inside a Docker container and restart Apache

I am new to Docker. In our Docker environment, Apache has been installed and is up and running.
Now I need to get into the container, modify the httpd.conf, save it and then I need to restart the apache.
Can you guys please let me know what needs to be done?
I am pretty much confused about the 'exec' and 'attach' commands.
No need to attach or exec (which is really a debug feature anyway).
You can use docker cp to copy a local version of your httpd.conf to the container. (That way, you can modify the file from the comfort of your local environment)
docker cp httpd.conf <yourcontainer_name>:/path/to/httpd.conf
Once that is done, you can send a USR1 signal to ask for a graceful restart (see the docker kill syntax):
docker kill --signal="USR1" <yourcontainer_name>
Replace <yourcontainer_name> by the container id or name which is running Apache.
That will only work if the main process launched by your container is
CMD ["apachectl", "-DFOREGROUND"]
See more at "Docker: How to restart a service running in Docker Container"
To update the Apache configs you need to:
Replace the Apache configs.
If you have the config folder mapped from outside the container, you should update the configs outside the container.
If your Apache configs are stored inside the container, you will need to run something like this:
docker cp httpd.conf YOUR_CONTAINER_NAME:/path/to/httpd.conf
Do a graceful Apache restart:
sudo docker exec -it YOUR_CONTAINER_NAME apachectl graceful
Enter a container by opening a bash shell:
docker exec -it containerName bash
I guess you'd better just reload the Apache config and not restart Apache.
But I wouldn't go this route; I'd just modify the Dockerfile and rebuild and rerun the image.
Edit: see https://docs.docker.com/engine/reference/commandline/exec/
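A minimal sketch of that Dockerfile route, assuming the official httpd:2.4 image (the image name, config path, and tags below are assumptions; an Ubuntu/apache2 base would use different paths):
# Dockerfile: bake the edited config into the image instead of patching a running container
FROM httpd:2.4
COPY httpd.conf /usr/local/apache2/conf/httpd.conf
Then rebuild and rerun:
docker build -t my-apache .
docker run -d --name my_apache -p 8080:80 my-apache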