Revert a Docker container back to its original image without restarting it? - testing

Normally, people are all about making Docker persist data in their containers and there are about twenty million questions on how to do exactly that, but I'm a tester and I want to dump all that crap I just did to my data and revert back to my known state (aka my image).
I'm aware I can do this by spinning up a new container based on my image but this forces me to disconnect and reconnect any network connections to my container and that's a huge pain.
Is it possible to revert a running container back to its original image without restarting it?

Sadly, you can't revert or change the image while the container is running. You'll need to stop your running containers and remove them; once their volumes are no longer attached to any container, running docker volume prune will destroy every volume that isn't in use.
Then you can simply restart your docker containers from the images, and you'll have a fresh start again.
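A minimal sketch of that sequence, assuming a container named mycontainer created from an image called myimage (both names are placeholders):
docker stop mycontainer
docker rm mycontainer
docker volume prune
docker run -d --name mycontainer myimage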
I also found this article to be a great reference when I was learning docker: https://web.archive.org/web/20190528002402/https://medium.com/the-code-review/top-10-docker-commands-you-cant-live-without-54fb6377f481

To revert to the original state, you have to restart the container. A container image is just a bunch of files; the running container also has a process started from those files, and you can't swap the files out from under a running process without that process most likely running into trouble.
So to answer your question: restart the container. A Docker container only takes milliseconds to start up; the rest of the time is the process itself starting.

Do not mount a volume to the container. Volumes, whether a data volume or a filesystem mount, are persistent. If you do not persist the data, you can simply run docker restart mycontainer.

I'm in a Windows environment. The script shown below works for me. Basically you are deleting the container (which is fine, because it is easily rebuilt from the image when docker-compose up is called) and then deleting the now-orphaned volumes.
This deletes ALL of the containers running in Docker, which works for me as I'm only running one app. If you are running multiple apps you will probably want to modify the solution.
I'm not sure how to delete just the top-level app by name, but see the note after the script for one possible way to scope the cleanup.
(Replace "myapp" with the name of your app.)
@echo off
echo.
echo.
echo Deleting Containers...
FOR /f "tokens=*" %%i IN ('docker ps -aq') DO docker rm %%i
echo.
echo Pruning orphaned volumes
docker volume prune -f
echo.
echo Starting myapp...
docker-compose -p myapp -f ../tools/docker-compose.yml up --remove-orphans
echo.
echo.
echo Done.
echo.
echo.
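If you do want to limit the cleanup to a single app rather than deleting every container, one possible alternative (a sketch using only standard docker-compose commands, with the same project name and compose file as above; it assumes the volumes you want to wipe are declared in that compose file) is to let compose tear down just its own project, volumes included:
docker-compose -p myapp -f ../tools/docker-compose.yml down -v
docker-compose -p myapp -f ../tools/docker-compose.yml up --remove-orphans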

In my experience, the quickest way to reset your environment does include taking the container down. But it's really not that painful.
Docker-compose can help you here:
docker-compose down
docker-compose up -d
That's it.

Related

Which WSL distro is using AppData\Local\Docker\wsl\data\ext4.vhdx after docker-desktop-data was exported and unregistered

Due to the increasing space consumption of WSL, I was forced to move my WSL distros to another disk.
Ubuntu
docker-desktop
docker-desktop-data
I used these commands.
wsl --shutdown
wsl --export (on all three of those distros)
wsl --import (already on another disk)
Now my environment is running fine, but the ext4.vhdx in AppData\Local\Docker\wsl\data is still present and I can't remove it because it is still in use.
When I look at process handles,
it's still being used by System, which doesn't tell me much.
If I run wsl --shutdown, all the virtual disks on disk E: lose their handles, but the one on disk C: is still in use.
Would you know how to find out which part of WSL (or whether it's even WSL at all) is using it?
Since shutting down WSL does not release that handle, it might be something else using it.
It's not docker-for-desktop; that one uses a different disk.
Thanks for your suggestions.
Docker Desktop for Windows, which uses WSL2, stores all image and container files in a separate virtual volume (vhdx). This virtual hard disk file can grow automatically when it needs more space (up to a certain limit). Unfortunately, if you reclaim some space, e.g. by removing unused images, the vhdx doesn't shrink automatically. Luckily, you can reduce its size manually by calling this command in PowerShell (as Administrator):
Optimize-VHD -Path $Env:LOCALAPPDATA\Docker\wsl\data\ext4.vhdx -Mode Full
If the above command fails with
The system failed to compact 'C:\Users\Maxx\AppData\Local\Docker\wsl\data\ext4.vhdx':
The process cannot access the file because it is being used by another process. (0x80070020).
exit from Docker Desktop, or stop the services and tasks using that file:
net stop com.docker.service
taskkill /IM "docker.exe" /F
taskkill /IM "Docker Desktop.exe" /F
wsl --shutdown
I reclaimed 15 GB of 40 GB.
Origin of the solution.
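If Optimize-VHD isn't available (it comes with the Hyper-V PowerShell module, which Windows Home lacks), a commonly used alternative is to compact the same vhdx with diskpart once WSL is shut down. A rough sketch, using the same path as above (adjust the user name):
wsl --shutdown
diskpart
rem inside the diskpart prompt:
select vdisk file="C:\Users\<you>\AppData\Local\Docker\wsl\data\ext4.vhdx"
attach vdisk readonly
compact vdisk
detach vdisk
exit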
You can just clean the data from the interface: Troubleshooting -> Clean / Purge data
Upgrading from WSL 1 to WSL 2 made it a bit messy, but resetting docker-desktop to its default settings and then purging the data from WSL (using the docker-desktop troubleshoot menu) cleared it for me.

docker image not working or running properly

This is part of a major issue I've been fighting to get resolved for two or even three weeks. First of all, I'm not a Docker expert; in fact, I don't know a thing about Docker. All I know is that I need it in order to make a connection between an API on localhost and my React Native app. The thing is, I managed to make it work on two other projects I created to test Docker, but not on the one I actually need. This is a Dockerfile for an API in .NET Core 2.2.
My Dockerfile is a combination of code I found on Stack Overflow and the example in the Docker documentation for containerizing a .NET Core app; this exact file worked for me on two other APIs, one a blank project and the other with a class library.
The code below shows the Dockerfile. When I run the command line and create the image, it shows no errors, but I know something is wrong, because when I run docker image ls the image is only around 200-300 MB, which seems way too small, and when I run that image with docker run ... and check the list of running Docker containers, it shows nothing.
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build-env
WORKDIR /app
# Copy csproj and restore as distinct layers
WORKDIR /src
COPY ISARRHH.sln ./
COPY ISARRHH.BusinessGraph/*.csproj ./ISARRHH.BusinessGraph/
COPY ISARRHH.APIWeb/*.csproj ./ISARRHH.APIWeb/
RUN dotnet restore
# Copy everything else and build
COPY . ./
WORKDIR /src/ISARRHH.BusinessGraph
RUN dotnet publish -c Release -o /app
WORKDIR /src/ISARRHH.APIWeb
RUN dotnet publish -c Release -o /app
# Build runtime image
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2
WORKDIR /app
COPY --from=build-env /app .
ENTRYPOINT ["dotnet", "isarrhh.dll"]
#######################################################
I want this bloody Docker setup to work. This was plan B for one of the modules I'm working on and it's giving me a headache; I managed to make it work on another project, and I want it to work on this API, which works with Office 365 and SharePoint.
EDIT: this is the project structure
ISARRHH (Solution)
|
|--ISARRHH.APIWeb (API)
| |_Dependencies
| |_Controllers
| |_Models
| |_Properties
| |_appsettings.json
| |_appsettings.Development.json
| |_Authentication.cs
| |_Configuration.cs
| |_Program.cs
| |_ProtectedApiCallHelper.cs
| |_PublicAppUsingUsernamePassword.cs
| |_SiteInformation.cs
| |_Startup.cs
| |_SiteInformation.cs
|
|--ISARRHH.BusinessGraph (Class Library)
| |_Dependencies
| |_UserGraph.cs
|
|--Solution Items
|_Dockerfile
|_.dockerignore
EDIT2: More information
REPOSITORY TAG IMAGE ID CREATED SIZE
isarrhh latest 67fc0628c921 13 minutes ago 268MB
According to this, the image was apparently created successfully, but when I run it with
docker run -d -p 3001:80 ...
and then check with
docker container ls
I see no container running. Also, when I check with the command you provided here,
docker logs -t isachile
I get this:
MacBook: ISARRHH$ docker logs -t isachile
2019-07-31T18:49:22.553317346Z Did you mean to run dotnet SDK commands? Please install dotnet SDK from:
2019-07-31T18:49:22.553390430Z https://go.microsoft.com/fwlink/?LinkID=798306&clcid=0x409
EDIT 3: SOLVED IT -- SORT OF...
I managed to run my Docker image by manually copying and pasting every file into a different project, each file copied individually into this second project, re-creating the Docker image each time. Yes, a seriously horrible and tedious process, but it worked. We're not considering this solution anymore, though, since the process is too slow for our scrum project; we need to connect React Native to our localhost API, so I still need an answer for this.
So there are two things here, and neither necessarily indicates a problem with Docker or your Dockerfile.
Size is only 200-300MB
That's about right. You haven't indicated whether you're using Windows or Linux containers, but in either case most of the weight comes simply from the .NET Core runtime. The whole point of containers is that the host OS kernel is shared (unlike a VM, where every VM gets its own separate OS installation), so the base image only ships the user-space files and libraries the app needs rather than a full operating system. Long and short, I don't know what you're expecting here in terms of size, but honestly 200-300 MB is a bit on the large side for an image. It's possible in many cases to package ASP.NET Core app images down to as little as 25-30 MB, though if you include the full runtime it's generally going to be closer to your 200-300 MB.
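If you want to see where those megabytes actually go, docker history breaks the image down layer by layer (isarrhh is the image name from your docker image ls output):
docker history isarrhh:latest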
The container isn't running.
All that means is that it exited. When the container is run, the entrypoint line is called, which just starts the ASP.NET Core app in Kestrel. That of course runs Program.Main, since it's just a console app after all. That in turn builds the web host and calls Run, which listens for TCP socket connections, keeping the app running and therefore keeping the container running.
If the container isn't running, then the app exited. That could happen for different reasons, but the most likely cause is that a runtime exception was thrown during the web host build phase (i.e. something in Program or Startup is throwing an exception). Try running something like:
docker logs -t {container name}
And you'll probably see a stacktrace and exception there. Fix the issue accordingly.
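For what it's worth, the exact message in your logs ("Did you mean to run dotnet SDK commands?") is what dotnet prints when the argument you give it isn't a runnable application, which suggests there is no isarrhh.dll inside /app. If the API project publishes its binary as ISARRHH.APIWeb.dll (an assumption based on the project name; check your publish output to confirm), the ENTRYPOINT would need to match, e.g.:
ENTRYPOINT ["dotnet", "ISARRHH.APIWeb.dll"]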

"Official" docker backup strategy - what about consistency?

The suggested strategy to manage and back up data in Docker looks something like this:
docker run --name mysqldata -v /var/lib/mysql busybox true
docker run --name mysql --volumes-from mysqldata mysql
docker run --volumes-from mysqldata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /var/lib/mysql
However, when I back up running containers that way, I won't get a consistent backup, will I?
I'm aware of tools like mysqldump, but what if I need to back up, for example, a folder to which files are constantly added and removed?
The underlying problem you are facing, i.e. backing up changing files, is independent of Docker. Use a tool such as rsnapshot or dirvish to make backups into a volume, and then use the approach you mentioned above to move those backups somewhere safer like Amazon S3 or Glacier, based on your reliability requirements.
Whether you mount volumes from another container or from the host VM using the -v switch, changes to the files are reflected in all containers (and on the host) in more or less real time. (There is some delay because of the AUFS layer that Docker uses on top of the host filesystem, but it's not huge.) If the backup container were running perpetually, it could keep taking backups, and the files would always reflect the latest state seen by the mysql container.
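For the MySQL part specifically, a hedged alternative for a consistent snapshot is to take a logical dump from the running container rather than tarring the raw data directory while mysqld has it open (this assumes the standard mysql image, with the root password available in the MYSQL_ROOT_PASSWORD environment variable):
docker exec mysql sh -c 'exec mysqldump --all-databases --single-transaction -uroot -p"$MYSQL_ROOT_PASSWORD"' > backup.sql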
Edit: For clarity.

Docker: How to live sync host folder with container folder?

I am working on a website powered by Node. So I have made a simple Dockerfile that adds my site's files to the container's FS, installs Node and runs the app when I run the container, exposing the private port 80.
But if I want to change a file for that app, I have to rebuild the container image and re-run it. That takes some seconds.
Is there an easy way to have some sort of "live sync", NFS like, to have my host system's app files be in sync with the ones from the running container?
This way I only have to relaunch it to have changes apply, or even better, if I use something like supervisor, it will be done automatically.
You can use volumes in order to do this. You have two options:
Docker managed volumes:
docker run -v /src/path nodejsapp
docker run -i -t --volumes-from <container id> ubuntu bash
The files you edit in the second container will be updated in the first one as well.
Host directory volume:
docker run -v `pwd`/host/src/path:/container/src/path nodejsapp
The changes you make on the host will update the container.
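As a concrete sketch of the second option for a Node app (the image tag, port, working directory and entry file are assumptions, so adjust them to your setup), you can mount the source directory and let nodemon restart the process whenever a file changes:
docker run -d -p 3000:3000 -v "$(pwd)":/usr/src/app -w /usr/src/app node:18 sh -c "npm install && npx nodemon app.js"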
If you are on OS X, this kind of volume share can become very slow, especially with Node-based apps (a lot of files). For this issue, http://docker-sync.io can help by providing a volume-share-like synchronisation without actually using volume shares; this usually speeds up reads and writes of the code directory in the container by 50-80 times, depending on which docker-machine you use.
For performance see https://github.com/EugenMayer/docker-sync/wiki/4.-Performance and for easy examples of how to use it, see the boilerplates at https://github.com/EugenMayer/docker-sync-boilerplate; for your case the unison example https://github.com/EugenMayer/docker-sync-boilerplate/tree/master/unison is the one you would need for NFS-like sync.
docker run -dit -v ~/my/local/path:/container/path/ myimageId
For /container/path/ you could use for instance /usr/src/app.
The flags:
-d = detached mode,
-it = interactive (keep STDIN open and allocate a TTY),
-v + paths = specifies the volume.
(If you just care about the volume, you can drop the -dit flags.)
Docker run reference
I use Skaffold's File Sync functionality for this. It gets the job done without needing overly complex configuration.
Setting up Skaffold in my project was as simple as installing Skaffold (through Chocolatey, since I'm on Windows), running skaffold init --generate-manifests in my project folder, and answering a couple of questions it asked.

Installing Trac to continually run

I have recently added Trac to my server to work with my Git Repo.
I can get it all working fine with tracd --port 8000 /path/to/myproject
But as soon as I close my PuTTY session the site goes offline. What's the best way to get Trac to keep running?
Have you tried
nohup tracd --port 8000 /path/to/myproject &
?
See nohup
You can then run multiple projects at once by simply running multiple instances of tracd:
nohup tracd --port 8000 /path/to/myproject1 &
nohup tracd --port 8001 /path/to/myproject2 &
nohup tracd --port 8002 /path/to/myproject3 &
And for a more correct answer about handling several projects, I redirect you to the documentation :) :
TracMultipleProjects/SingleEnvironment
TracMultipleProjects/MultipleEnvironments
Running Trac behind another web server is pretty common, if not the standard, if performance and serving many users matter to you; wsgi is generally recommended as current best practice there. But Apache or another full-fledged web server might be overkill for private/small work-group use, if you don't already have one running for other purposes. Up to 5 concurrent users can still be served by tracd, and you profit from the rather small footprint of this solution.
But the OP's question sprang from a failure to deploy tracd for the task anyway, so I'll follow up on this way of serving Trac now:
The best way to run tracd detached from the starting console is its daemon mode:
./bin/tracd -p 8000 -d /data/trac/sandbox_1.0
See included help for many more valuable options:
>$ tracd --help
Usage: tracd [options] [projenv] ...
Options:
...
-p PORT, --port=PORT the port number to bind to
-r, --auto-reload restart automatically when sources are modified
-s, --single-env only serve a single project without the project list
-d, --daemonize run in the background as a daemon
-e PARENTDIR, --env-parent-dir=PARENTDIR
...
Note 1: See even more about running tracd and related pages in the wiki documentation at trac.edgewall.org, please.
Note 2: The parent-dir option allows an arbitrary number of Trac project environment folders to be detected and served from a single instance of tracd. They just have to share a common path, i.e. put them all into the same folder (your parent dir); see the example after these notes.
Note 3: If you don't use the -s switch, tracd will display a project index page. Hints about customizing that page are part of the excellent wiki documentation of the Trac project at trac.edgewall.org as well.
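For example (the path is only an illustration), a single daemonized tracd can serve every Trac environment found under one parent directory:
tracd -d -p 8000 -e /data/trac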
Check tracd options using tracd --help. There you will find a line which states:
-d, --daemonize run in the background as a daemon
Voila.