Cannot use pintos over SSH

I use a Docker container to try Pintos on my Mac (M1). Everything works when I start the container with docker start -i pintos.
However, when I use SSH to connect to my Docker container (i.e., ssh -p xxxx root@local), the error message -bash: pintos: command not found appears when I run pintos -- in the directory /pintos/src/threads/build.

Related

How to keep Windows Container running?

I need to keep my Windows Container up so I can run further commands on it using docker exec.
On Linux, I'd start it to run either sleep infinity, or tail -f /dev/null. Alternatively, I could borrow pause.c from Kubernetes.
What does this look like on Windows?
Using ping -t localhost will do it.
A full run command would be:
docker run -d --name YourContainer mcr.microsoft.com/windows/nanoserver:1809 ping -t localhost
Note: Make sure 1809 matches your own Windows version, which you can check via [WIN]+[R] -> winver.
You should then be able to step into the running container instance with the name YourContainer:
docker exec -it YourContainer cmd
Kubernetes on Windows used to use ping
cmd /c ping -t localhost
This prints a lot of unnecessary output, so a good improvement is to discard it:
cmd /c ping -t localhost > NUL
What Kubernetes does now is to run a custom pauseloop.exe binary.
In late 2022, the current home for wincat/pauseloop is https://github.com/kubernetes/kubernetes/tree/master/build/pause/windows/wincat. The move was implemented in https://github.com/kubernetes-sigs/sig-windows-tools/pull/270.
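On the Linux side, the `sleep infinity` idiom mentioned above can be wrapped so the container also stops promptly on `docker stop` (which sends SIGTERM). This is only a sketch of the pause idea, not the actual Kubernetes pauseloop source:

```shell
#!/usr/bin/env bash
# Block forever, but exit cleanly when SIGTERM arrives (e.g. from `docker stop`).
trap 'exit 0' TERM INT
sleep infinity &   # run sleep in the background so the trap can fire during wait
wait $!
```

Without the trap, a foreground `sleep` as PID 1 ignores SIGTERM by default, so `docker stop` would wait out its grace period and then SIGKILL the container.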

Ignore SSH timeout to continue Gitlab CI

I'm using GitLab CI to deploy my project to virtual machines over SSH. Some of the virtual machines may be off at the moment of my deploy, so my job fails when I can't reach one of them.
Here is what I'm doing in my CI:
- ssh -o StrictHostKeyChecking=no user@vm1 "mkdir -p /myproject/releases/$CI_COMMIT_TAG"
- ssh -o StrictHostKeyChecking=no user@vm1 "mkdir -p /myproject/releases/$CI_COMMIT_TAG/dev"
- rsync -az * user@vm1:/myproject/releases/$CI_COMMIT_TAG
At the first ssh command, I get this error:
ssh: connect to host vm1 port 22: Connection timed out ERROR: Job failed: exit status 1
How can I ignore the SSH timeout so my GitLab CI pipeline continues?
The best solution for me would be:
If a VM doesn't answer within about 20 seconds, ignore it and try to deploy to the next VM.
Thank you very much :)
EDIT: I've got the same problem with rsync, of course...
You can try adding a || true after each ssh command so it always exits successfully, which GitLab CI will not interpret as an error; the job will still wait until each command finishes or times out.
The best solution for my problem was a bash script:
ping the remote VM
if the VM answers the ping: deploy
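That script might look like the sketch below. The host list and the echo placeholders are hypothetical; the real deploy step would be the ssh/rsync commands from the job, ideally with `-o ConnectTimeout=20` on ssh to match the 20-second requirement:

```shell
#!/usr/bin/env bash
# Hypothetical list of target VMs; replace with the real host names.
hosts="vm1 vm2 vm3"

reachable() {
  # one ICMP probe, waiting at most 5 seconds for a reply (Linux iputils ping)
  ping -c 1 -W 5 "$1" > /dev/null 2>&1
}

for vm in $hosts; do
  if reachable "$vm"; then
    echo "deploying to $vm"
    # ssh -o StrictHostKeyChecking=no -o ConnectTimeout=20 "user@$vm" \
    #     "mkdir -p /myproject/releases/$CI_COMMIT_TAG"
    # rsync -az * "user@$vm:/myproject/releases/$CI_COMMIT_TAG"
  else
    echo "skipping $vm (unreachable)"
  fi
done
```

Because the loop decides per host, one unreachable VM no longer fails the whole job.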

How to run container using image id or name?

I have created an Apache server image from a tar file using the command below,
cat /home/ubuntu/docker-work/softwares/httpd-2.4.27.tar.gz | docker import - httpd:2.4
The image is created successfully and its name is httpd!
I then ran the command below,
docker run -d -p 80:80 --name=apache httpd:2.4
which gives the error:
docker: Error response from daemon: No command specified.
How do I run the above image using the name(httpd) ?
The error you are getting means that the image imported from the tar doesn't contain a default command (CMD) to start the container.
Docker allows you to omit CMD in the Dockerfile, but in that case you need to provide the command when doing docker run. Example:
docker run -d -p 80:80 --name=apache httpd:2.4 httpd-foreground
Where httpd-foreground is the command that will start the httpd server process inside the container.
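Alternatively, docker import supports the --change flag to apply a Dockerfile instruction during import, so the default command can be baked into the image up front (shown against the same tarball and tag as above):

```shell
# Set a default CMD at import time, then run without extra arguments.
cat /home/ubuntu/docker-work/softwares/httpd-2.4.27.tar.gz \
  | docker import --change 'CMD ["httpd-foreground"]' - httpd:2.4
docker run -d -p 80:80 --name=apache httpd:2.4
```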

Mounting user SSH key in container

I am building a script that will mount some local folders into the container, one of which is the user's ~/.ssh folder. That way, users can still utilize their SSH key for Git commits.
docker run -ti -v $HOME/.ssh/:$HOME/.ssh repo:tag
But that does not mount the SSH folder into the container. Am I doing it incorrectly?
The typical syntax is (from Mount a host directory as a data volume):
docker run -d -P --name web -v /src/webapp:/opt/webapp training/webapp python app.py
(you can skip the command part, here 'app.py', if your image defines an entrypoint and default command)
(-d does not apply in your case; it did for that Python web server)
Try:
docker run -ti -v $HOME/.ssh:$HOME/.ssh repo:tag
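If the keys only need to be read inside the container, a read-only mount keeps them safe from accidental modification; the :ro suffix is standard Docker volume syntax, and quoting guards against spaces in $HOME:

```shell
# Mount the host's ~/.ssh into the container read-only
docker run -ti -v "$HOME/.ssh:$HOME/.ssh:ro" repo:tag
```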

Can't get clipboard forwarding working, while being in Docker under SSH

I have an SSH connection to a server without an X session. I've set up SSH to forward X. When I connect to that server over SSH and type:
user@host-machine:~$ ssh remote-server
user@remote-server:~$ echo hello | xsel -bi
everything works as expected: I get "hello" on my host's clipboard.
I have a simple Docker image based on Ubuntu. I run the image with the following parameters: docker run -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix myimage bash. Then, inside the container, I type:
user@host-machine:~$ docker run -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix myimage bash
user@92c40ac9feb7:~$ echo hello | xsel -bi
everything works as expected: I get "hello" on my host's clipboard.
Now I want to run the Docker image on that server from within an SSH session with X forwarding enabled. When I try to copy something to the clipboard in this configuration, I get the following error:
user@host-machine:~$ ssh remote-server
user@remote-server:~$ docker run -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix myimage bash
user@688e48e4b90a:~$ echo hello | sudo xsel -bi
xsel: Can't open display: (null)
: Connection refused
user@688e48e4b90a:~$
Why can't I get clipboard forwarding working in a Docker container under an SSH session?
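A plausible explanation (an assumption, since it depends on the sshd configuration): with X forwarding, sshd sets DISPLAY to something like localhost:10.0, i.e. a TCP proxy on the remote host's loopback interface, not the Unix socket in /tmp/.X11-unix, so mounting that directory doesn't help, and the container's own localhost has nothing listening. On top of that, sudo usually strips DISPLAY and XAUTHORITY from the environment. A sketch of a workaround that shares the host network namespace and X credentials (assumes the container user is root):

```shell
# Let the container reach sshd's X11 TCP proxy on the host's loopback
docker run --net=host \
  -e DISPLAY=$DISPLAY \
  -v "$HOME/.Xauthority:/root/.Xauthority:ro" \
  myimage bash
# inside the container, run xsel without sudo so DISPLAY survives:
#   echo hello | xsel -bi
```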