In the first container I created a volume:
docker run \
  -d \
  -v recordings:/tmp/recordings \
  --name janus \
  menet docker.io/swmansion/janus-gateway:latest
Now I need to share the recordings volume with another container that will be built by jib-maven-plugin.
Recently they added support for volumes, but it seems to support only absolute paths, not named volumes.
https://github.com/GoogleContainerTools/jib/issues/1121
Is there any way to add named volumes with this plugin so that they can be shared between containers?
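For context, attaching a named volume happens at docker run time rather than at image-build time, so a minimal sketch of what I'd like to end up with (the image name below is just a placeholder for whatever jib actually produces) would be:
# Named volumes are bound when the container starts, not baked into the image
docker run -d \
  -v recordings:/tmp/recordings \
  --name consumer \
  my-jib-built-image:latest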
Context
Our current build system builds docker images inside of a docker container (Docker in Docker). Many of our docker builds need credentials to be able to pull from private artifact repositories.
We've handled this with docker secrets: passing the secret to the docker build command and, in the Dockerfile, referencing the secret in the RUN command where it's needed. This means we're using Docker BuildKit. This article explains it.
We are moving to a different build system (GitLab), and the admins have disabled Docker in Docker (for security reasons), so we are moving to Kaniko for docker builds.
Problem
Kaniko doesn't appear to support secrets the way docker does (there are no command-line options to pass a secret through the Kaniko executor).
The credentials the docker build needs are stored in GitLab variables. For DinD, you simply add those variables to the docker build as a secret:
DOCKER_BUILDKIT=1 docker build . \
  --secret=type=env,id=USERNAME \
  --secret=type=env,id=PASSWORD
And then in the Dockerfile, use the secret:
RUN --mount=type=secret,id=USERNAME --mount=type=secret,id=PASSWORD \
USER=$(cat /run/secrets/USERNAME) \
PASS=$(cat /run/secrets/PASSWORD) \
./scriptThatUsesTheseEnvVarCredentialsToPullArtifacts
...rest of build..
Without the --secret flag to the kaniko executor, I'm not sure how to take advantage of docker secrets... nor do I understand the alternatives. I also want to continue to support developer builds. We have a 'build.sh' script that takes care of gathering credentials and adding them to the docker build command.
Current Solution
I found this article and was able to sort out a working solution. I want to ask the experts if this is valid or what the alternatives might be.
I discovered that when the kaniko executor runs, it appears to mount a volume into the image that's being built at: /kaniko. That directory does not exist when the build is complete and does not appear to be cached in the docker layers.
I also found out that if the Dockerfile secret is not passed in via the docker build command, the build still executes.
So my gitlab-ci.yml file has this excerpt (the REPO_USER/REPO_PWD variables are GitLab CI variables):
- echo "${REPO_USER}" > /kaniko/repo-credentials.txt
- echo "${REPO_PWD}" >> /kaniko/repo-credentials.txt
- /kaniko/executor
--context "${CI_PROJECT_DIR}/docker/target"
--dockerfile "${CI_PROJECT_DIR}/docker/target/Dockerfile"
--destination "${IMAGE_NAME}:${BUILD_TAG}"
The key piece here is echoing the credentials to a file in the /kaniko directory before calling the executor. That directory is (temporarily) mounted into the image the executor is building, and since all of this happens inside the kaniko image, the file disappears when the kaniko (GitLab) job completes.
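For reference, a minimal sketch of how that excerpt could sit inside a full .gitlab-ci.yml job, assuming the standard kaniko debug image (the job and stage names here are placeholders):
build-image:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - echo "${REPO_USER}" > /kaniko/repo-credentials.txt
    - echo "${REPO_PWD}" >> /kaniko/repo-credentials.txt
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}/docker/target"
      --dockerfile "${CI_PROJECT_DIR}/docker/target/Dockerfile"
      --destination "${IMAGE_NAME}:${BUILD_TAG}"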
The developer build script (snip):
# To keep it simple, this assumes that the developer has their credentials
# cached in a file (ignored by git) called dev-credentials.txt
DOCKER_BUILDKIT=1 docker build . \
--secret id=repo-creds,src=dev-credentials.txt
Basically the same as before; I had to put the credentials in a file instead of environment variables.
The dockerfile (snip):
RUN --mount=type=secret,id=repo-creds,target=/kaniko/repo-credentials.txt \
    USER=$(sed '1q;d' /kaniko/repo-credentials.txt) \
    PASS=$(sed '2q;d' /kaniko/repo-credentials.txt) \
    ./scriptThatUsesTheseEnvVarCredentialsToPullArtifacts
...rest of build..
This Works!
In the Dockerfile, by mounting the secret in the /kaniko subfolder, the same RUN line works with both the DinD developer build and the CI Kaniko executor.
For dev builds, the DinD secret works as always (I had to change it to a file rather than env variables, which I didn't love).
When the build is run by Kaniko, I suppose that since the secret in the RUN command is not found, it doesn't even try to write the temporary credentials file (which I expected would fail the build). Instead, because I wrote the variables directly to the temporarily mounted /kaniko directory, the rest of the RUN command was happy.
Advice
To me this seems kludgier than expected, and I'd like to hear about other/alternative solutions. Finding out that the /kaniko folder is mounted into the image at build time seems to open a lot of possibilities.
Using the Azure CLI, I'm trying to add environment variables to an existing Azure container with the following command:
$ az container create \
    --resource-group toms-cool-group \
    --name my-cool-container \
    --image my-cool-container:v1 \
    --environment-variables 'NumWords'='5' 'MinLength'='8'
But I get the following error back:
The updates on container group 'receipt-validator' are invalid. If you are going to update the os type, restart policy, network profile, CPU, memory or GPU resources for a container group, you must delete it first and then create a new one.
Any ideas?
You can indeed add environment variables to an existing Azure container with the command you showed:
az container create \
  --resource-group toms-cool-group \
  --name my-cool-container \
  --image my-cool-container:v1 \
  --environment-variables 'NumWords'='5' 'MinLength'='8'
As far as I can see, the error refers to the container group 'receipt-validator', which is not the group named in your command (my-cool-container in resource group toms-cool-group). Maybe that's the mistake you made. Additionally, when you add the environment variables, the only thing that should change in the command is the environment variables you want to add; everything else must stay the same.
I tested this on my side and it worked (screenshot omitted).
By the way, an update is really just a redeployment of the Azure container; the difference is that on redeploy the container image layers are pulled from those cached by the previous deployment.
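If it helps, here is a hedged sketch of the flow, assuming you reuse the CPU/memory/OS values reported by az container show (the values below are placeholders):
# Inspect the existing container group so the re-create reuses the same settings
az container show --resource-group toms-cool-group --name my-cool-container \
  --query "{image: containers[0].image, cpu: containers[0].resources.requests.cpu, memory: containers[0].resources.requests.memoryInGb, osType: osType}" -o table

# Re-run create with identical settings, adding only the environment variables
az container create \
  --resource-group toms-cool-group \
  --name my-cool-container \
  --image my-cool-container:v1 \
  --cpu 1 --memory 1.5 --os-type Linux \
  --environment-variables 'NumWords'='5' 'MinLength'='8'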
This is part of a major issue I've been fighting to resolve for two or even three weeks. First of all, I'm not a docker expert; in fact, I don't know a thing about docker. All I know is that I need it to make a connection between an API on localhost and my app in React Native. The thing is, I managed to make it work on two other projects I created to test docker, but not on the one I actually need. This is a dockerfile for an API in .NET Core 2.2.
My dockerfile is a combination of code I found on Stack Overflow and the example in the docker documentation for building a .NET Core image. This specific file worked for me on two other APIs, one a blank project and the other one with a class library.
The code below shows the dockerfile. When I run the command line and create the image, it shows no errors, but I know there is something wrong, because when I run docker image ls the image is around 200-300 MB, which seems way too small, and when I run that image with docker run... and check the list of running docker containers, it shows nothing.
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build-env
WORKDIR /app
# Copy csproj and restore as distinct layers
WORKDIR /src
COPY ISARRHH.sln ./
COPY ISARRHH.BusinessGraph/*.csproj ./ISARRHH.BusinessGraph/
COPY ISARRHH.APIWeb/*.csproj ./ISARRHH.APIWeb/
RUN dotnet restore
# Copy everything else and build
COPY . ./
WORKDIR /src/ISARRHH.BusinessGraph
RUN dotnet publish -c Release -o /app
WORKDIR /src/ISARRHH.APIWeb
RUN dotnet publish -c Release -o /app
# Build runtime image
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2
WORKDIR /app
COPY --from=build-env /app .
ENTRYPOINT ["dotnet", "isarrhh.dll"]
#######################################################
I want this bloody docker to work. This was plan B on one of the modules I'm working on, and it is giving me a headache. I managed to make it work on another project; I want it to work on this API, which works with Office 365 and SharePoint.
EDIT: this is the project structure
ISARRHH (Solution)
|
|--ISARRHH.APIWeb (API)
| |_Dependencies
| |_Controllers
| |_Models
| |_Properties
| |_appsettings.json
| |_appsettings.Development.json
| |_Authentication.cs
| |_Configuration.cs
| |_Program.cs
| |_ProtectedApiCallHelper.cs
| |_PublicAppUsingUsernamePassword.cs
| |_SiteInformation.cs
| |_Startup.cs
|
|--ISARRHH.BusinessGraph (Class Library)
| |_Dependencies
| |_UserGraph.cs
|
|--Solution Items
|_Dockerfile
|_.dockerignore
EDIT2: More information
REPOSITORY TAG IMAGE ID CREATED SIZE
isarrhh latest 67fc0628c921 13 minutes ago 268MB
According to this, the image was apparently created successfully, but when I run it with
docker run -d -p 3001:80 ...
then i check with
docker container ls
I see no container running. Also, when I check with the command you provided here
docker logs -t isachile
I get this:
MacBook: ISARRHH$ docker logs -t isachile
2019-07-31T18:49:22.553317346Z Did you mean to run dotnet SDK commands? Please install dotnet SDK from:
2019-07-31T18:49:22.553390430Z https://go.microsoft.com/fwlink/?LinkID=798306&clcid=0x409
EDIT 3: SOLVED IT -- SORT OF...
I managed to run my docker by manually copying and pasting every file into a different project, each file individually, creating the docker image each time. Yes, a seriously horrible and tedious process, but it worked. However, we're not considering this solution anymore, since the process is too slow for our Scrum project. We need to connect React Native to our localhost API, so I still need an answer for this.
So there are two things here, and neither necessarily indicates a problem with Docker or your Dockerfile.
Size is only 200-300MB
That's about right. You haven't indicated whether you're using Windows or Linux containers, but in either case, most of the weight comes simply from the .NET Core runtime. The whole point of containers is that the host OS is shared (unlike a VM, where every VM gets its own separate OS installation). The only things coming from the base OS image are user-space files and directories; the main system components are provided by the host operating system. Long and short, I don't know what you're expecting here in terms of size, but honestly 200-300 MB is a bit on the large side for an image. It's possible in many cases to package ASP.NET Core app images down to as little as 25-30 MB, though if you include the full runtime, it's generally going to be closer to your 200-300 MB.
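For example, one hedged way to shave some of that off (a sketch only; the dll name must match whatever your publish step actually produced) is to base the final stage on the alpine runtime variant:
# Final stage only - sketch that reuses the build-env stage from your Dockerfile
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2-alpine
WORKDIR /app
COPY --from=build-env /app .
# Replace YourApp.dll with the assembly name dotnet publish produced
ENTRYPOINT ["dotnet", "YourApp.dll"]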
The container isn't running.
All that means is that it exited. When the container is run, the entrypoint line will be called, which just starts up the ASP.NET Core app running in Kestrel. That of course runs Program.Main, since it's just a console app, after all. That in turn builds the web host and calls Run, which listens for TCP socket connections, keeping the app running, which therefore keeps the container running.
If the container isn't running, then the app exited. That could happen for different reasons, but the most likely cause is that a runtime exception was thrown during the web host build phase (i.e. something in Program or Startup is throwing an exception). Try running something like:
docker logs -t {container name}
And you'll probably see a stacktrace and exception there. Fix the issue accordingly.
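For example, one quick way to try this (the container name below is just a placeholder):
# Give the container a name so it's easy to find even after it exits
docker run -d -p 3001:80 --name isarrhh-test isarrhh
# List all containers, including exited ones, to see the exit status
docker ps -a --filter name=isarrhh-test
# Show the logs (timestamped) - the exception/stack trace should be here
docker logs -t isarrhh-test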
The suggested strategy to manage and backup data in docker looks something like this:
docker run --name mysqldata -v /var/lib/mysql busybox true
docker run --name mysql --volumes-from mysqldata mysql
docker run --volumes-from mysqldata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /var/lib/mysql
However, when I back up running containers that way, I won't get a consistent backup, will I?
I'm aware of tools like mysqldump, but what if I need to backup, for example, a folder to which files are constantly added and removed?
The underlying problem you are facing, i.e. backing up changing files, is independent of docker. Use a tool such as rsnapshot or dirvish to make backups into a volume, and then use the approach you mentioned above to move those backups somewhere safer like Amazon S3 or Glacier, based on your reliability requirements.
Whether you mount volumes from another container or from the host VM using the -v switch, changes to the files are reflected in all containers (and the host VM) in more or less real time. (There is some delay because of the AUFS layer docker uses on top of the host filesystem, but it's not huge.) If the backup container were running perpetually, it could keep taking backups, and the files would always reflect the latest files seen by the mysql container.
Edit: For clarity.
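For the MySQL data specifically, since you mention mysqldump, a hedged sketch of a consistent logical backup taken from the running container (the container name and credentials are placeholders) would be:
# Dump all databases from the running mysql container to a file on the host
docker exec mysql sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > backup.sql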
I am working on a website powered by Node. So I have made a simple Dockerfile that adds my site's files to the container's FS, installs Node and runs the app when I run the container, exposing the private port 80.
But if I want to change a file for that app, I have to rebuild the container image and re-run it. That takes some seconds.
Is there an easy way to have some sort of "live sync", NFS like, to have my host system's app files be in sync with the ones from the running container?
This way I only have to relaunch it to have the changes applied, or even better, if I use something like supervisor, it will be done automatically.
You can use volumes in order to do this. You have two options:
Docker managed volumes:
docker run -v /src/path nodejsapp
docker run -i -t --volumes-from <container id> bash
The file you edit in the second container will update the first one.
Host directory volume:
docker run -v `pwd`/host/src/path:/container/src/path nodejsapp
The changes you make on the host will update the container.
If you are on OSX, those kinds of volume shares can become very slow, especially with node-based apps (a lot of files). For this issue, http://docker-sync.io can help by providing volume-share-like synchronisation without using volume shares. This usually speeds up the container's read/write speed for the code directory by 50-80 times, depending on which docker-machine you use.
For performance, see https://github.com/EugenMayer/docker-sync/wiki/4.-Performance, and for easy examples of how to use it, see the boilerplates at https://github.com/EugenMayer/docker-sync-boilerplate. For your case, the unison example https://github.com/EugenMayer/docker-sync-boilerplate/tree/master/unison is the one you would need for NFS-like sync.
docker run -dit -v ~/my/local/path:/container/path/ myimageId
For /container/path/ you could use for instance /usr/src/app.
The flags:
-d = detached mode,
-it = interactive,
-v + paths = specifies the volume.
(If you just care about the volume, you can drop the -dit flag.)
Docker run reference
I use Skaffold's File Sync functionality for this. It gets the job done, and without needing overly complex configuration.
Setting up Skaffold in my project was as simple as installing Skaffold (through Chocolatey, since I'm on Windows), running skaffold init --generate-manifests in my project folder, and answering a couple of questions it asked.
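For reference, a minimal sketch of the relevant sync section in skaffold.yaml (the image name and file globs are placeholders, and the schema version may differ depending on your Skaffold release):
# skaffold.yaml (sketch)
apiVersion: skaffold/v2beta29
kind: Config
build:
  artifacts:
    - image: my-node-app
      sync:
        manual:
          # Copy changed JS files straight into the running container
          - src: "src/**/*.js"
            dest: .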