I created a docker container. I can ssh into the docker container. How can I view files in my docker container with a GUI (specifically the WebStorm IDE)?
I'm running a Mac on OS X Yosemite 10.10.5.
The usual pattern is to mount your source code into the container as a volume. Your IDE works with the files on your host machine, and the processes running in the container see the same files (see the Docker volumes docs).
There might be a way to set up remote file access with WebStorm, but I'd recommend trying the other approach first.
docker run -d -v /mycodedir:/mydockerdir {libraryname}/{imagename}
If you mount your working directory and map it to the directory the container runs its files from, you will be able to edit the files in WebStorm and see the changes in the container.
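For example, a minimal sketch of that workflow (the image name, host path, and start command below are placeholders, not taken from the question):
# mount the host project directory into the container and start it detached
docker run -d --name myapp -v ~/projects/myapp:/usr/src/app -w /usr/src/app node npm start
# open ~/projects/myapp in WebStorm on the Mac; edits show up inside the container immediately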
Related
I'm attempting to use my WSL2 docker containers with VS Code, though I now regret this. I attempted to follow these directions to get everything installed and configured correctly.
After installing Docker Desktop, my previous containers and images are not shown with docker ls and docker images when run from WSL2. However, there are still many GB of data under /var/lib/docker. Is there some way to attempt to recover this?
I'm aware that it's not a good idea to access WSL Linux files (located in %LOCALAPPDATA%\Packages\CanonicalGroupLimited.UbuntuonWindows_79rhkp1fndgsc\LocalState\rootfs\) directly from Windows, but does that recommendation also apply to mounting a WSL path as a volume in a container running under Docker for Windows?
For example, if I first do this on Windows:
mklink /j %USERPROFILE%\wsl %LOCALAPPDATA%\Packages\CanonicalGroupLimited.UbuntuonWindows_79rhkp1fndgsc\LocalState\rootfs
Then do this in WSL with Docker already configured:
$ docker run --rm -v /c/Users/$USER/wsl/home/$USER/myapp:/myapp -ti ubuntu:18.04 bash
The above assumes the requisite root = / setting under [automount] in /etc/wsl.conf, and that the user has the same name in both environments.
I can see my files inside the container under "/myapp" just fine, but I'm not sure whether it's safe to write to that path. If both WSL and the container are running Ubuntu, is it any safer?
I really prefer to work full-time from WSL with my home directory containing the familiar Linux dot files.
And just for kicks, what if in WSL "$HOME/myapp" is a symlink to "/c/myapp"? Yes, I should then just use -v /c/myapp:/myapp for simplicity, but is traversing through the rootfs paths really bad?
Accessing the file paths through Docker for Windows still uses Windows semantics to access the files, so you risk corrupting your WSL distro instance. However, the newest Windows Insider builds include a Plan 9 server embedded in the proprietary /init that exposes Linux files to Windows over what is essentially a network share. See https://blogs.msdn.microsoft.com/commandline/2019/02/15/whats-new-for-wsl-in-windows-10-version-1903/
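With that feature (Windows 10 1903 and later), the distro's files are exposed to Windows through a \\wsl$ network share, so you can reach them without touching the rootfs folder directly. A quick sketch (distro and user names are placeholders):
dir \\wsl$\Ubuntu-18.04\home\myuser\myapp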
An alternative would be to use ssh/scp via the Win32 OpenSSH tools on the same Windows host (or another machine), or from a Linux host.
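For example, a rough sketch of the scp route, assuming an sshd is running inside the WSL distro on port 2222 (host, user, and paths are made up for illustration):
scp -P 2222 myuser@localhost:/home/myuser/myapp/config.yml .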
I have to build and push a Docker image via GitLab CI. I have gone through the official documentation.
https://docs.gitlab.com/ee/ci/docker/using_docker_build.html
I want to adopt the shell method, but my issue is that I already have a working GitLab runner on my server machine. What is the procedure in that case? If I re-register the runner on the same machine, will it impact the old one?
Thanks in advance.
Assuming that you installed gitlab-runner as a system service and not inside a container, you can easily register another shell runner on your server using the command gitlab-ci-multi-runner register.
This is indirectly confirmed by the advanced configuration documentation, which states that the config.toml of the gitlab-runner service may contain multiple [[runners]] sections.
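A minimal sketch of registering a second, shell-based runner alongside the existing one (the URL, token, and description are placeholders you would take from your GitLab project's runner settings):
sudo gitlab-ci-multi-runner register \
  --non-interactive \
  --url https://gitlab.example.com/ \
  --registration-token YOUR_REGISTRATION_TOKEN \
  --executor shell \
  --description "shell-docker-build-runner"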
Note: To allow the shell runner to build docker images, you will need to add the gitlab-runner user to the docker group, e.g.:
sudo gpasswd --add gitlab-runner docker
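You can then sanity-check that the gitlab-runner user can actually reach the Docker daemon before running a pipeline, for example:
sudo -u gitlab-runner -H docker info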
I am running my Selenium Java tests in Docker's Chrome container, installed on my Windows system.
The upload tests pass if I run them in Chrome on Windows, but they fail with the error path is not absolute: D:\xyz.csv when I run the same tests in Docker.
I am running my tests against the Chrome node in Docker.
Normal Selenium tests work in Docker, but the upload tests do not.
Please suggest how to copy this file into the container so I can use that path in the upload tests.
Thanks
That is because Chrome looks for that path on the system where it runs. But the container is a Linux-based system, and its file paths do not look like that.
So you need to share a volume when launching the Chrome container:
docker run -v localfolder:containerfolder
and in your test you need to use the containerfolder path, not the localfolder path.
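As a rough sketch (the image name and paths are examples, not taken from your setup), start the Chrome container with the folder holding the CSV mounted, and reference the container-side path in the test:
docker run -d -p 4444:4444 -v D:/selenium-data:/home/seluser/data selenium/standalone-chrome
# in the test, point the file input at /home/seluser/data/xyz.csv instead of D:\xyz.csv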
I found the solution to this problem a long time ago.
Use the command below to copy the file from the Windows/Linux system into the Chrome container running in Docker, say into its /tmp folder; this path can then be referenced in the Selenium tests running in Docker.
docker cp D:\file.csv docker_chrome_1:/tmp/
This command can be run once Docker's Chrome container is up and running on the Windows/Linux machine.
Recently I've been dabbling with Vagrant and Docker. They are quite interesting tools, but I haven't been able to convince myself that they're the way to go on my OS X machine quite yet. Being an old Unix hand, I have to say that I like having a consolidated and sandboxed environment for development purposes.
I've seen a lot of chatter and a number of friends have been using vagrant with just stock vim for editing. I'm not really a fan of that approach and would probably prefer to use the vm provider's sharing mechanism OR, more likely, NFS.
Personally I'd like to be able to edit directly in TextMate, SublimeText, Emacs (on OS X), or even perhaps use RubyMine and its various IDE features, etc.
Is there any way to really get the workflow down so that such an environment will be essentially like working on a local environment without having to pull a lot of additional background strings to make things work out?
I suppose a few well placed scripts could go a long way, but I've not found any solid answers on really making this a seamless environment.
What actually worked for me was to use boot2docker, which makes it easy to install a lightweight virtual machine (with VirtualBox) that hosts your docker daemon and images. The only thing you need in order to run docker commands is to run $(boot2docker shellinit) when you open a new Terminal.
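In practice the setup looks roughly like this (standard boot2docker commands; the exact output varies):
boot2docker init             # create the VirtualBox VM (first time only)
boot2docker up               # start the VM that hosts the docker daemon
$(boot2docker shellinit)     # export DOCKER_HOST and friends into the current shell
docker ps                    # docker commands now talk to the daemon inside the VM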
If you need to also have your files on an OS X folder and share them with a running docker image, you need some additional setup, but once you do it, you won't have to do it again.
Have a look here for a nice walkthrough on how to do it. The steps in short are:
Get a special boot2docker image that allows you to use shared folders for VirtualBox
Configure VirtualBox to share a folder:
VBoxManage sharedfolder add boot2docker-vm -name home -hostpath /Users
This will share your /Users folder with the boot2docker image that hosts docker.
From your Mac, share the folder you need with a folder in a docker image, like:
docker run -it -v /Users/me/dev/my-project:/root/src:rw ubuntu /bin/bash
One small annoyance that I haven't found a way around is that you no longer access your software through localhost, because it actually runs on the boot2docker instance. You have to run boot2docker ip and access that IP.
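For example (port 8080 is just a stand-in for whatever port your -p flag publishes):
boot2docker ip                        # prints the VM's address, e.g. something like 192.168.59.103
curl http://$(boot2docker ip):8080    # reach your service via the VM's IP instead of localhost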
Hope that helps!