Mappings between Docker Remote API and its command line client

Docker documentation is pretty good at describing what you can do from the command line.
It also gives a pretty comprehensive description of the commands associated with the remote API.
It does not, however, appear to give sufficient context for using the remote API to do things that one would do using the command line.
An example of what I am talking about: suppose you want to run a command like:
docker run --rm=true -i -t -v /home/user/resources:/files -p 8080:8080 --name SomeService myImage_v3
using the Remote API. There is a container "start" command in the Remote API:
POST /containers/(id or name)/start
And this command refers back to the create container command for the rather long list of JSON strings that you would need to add in order to do the actual start.
The problem here is twofold. First, just calling this command doesn't work; apparently there is more you have to do (I am guessing you have to do a create, then a start). Second, it is unclear which JSON fields you need to set in order to do what I showed on the command line (setting ports, mapping to the external directory, etc.). Not only do the JSON fields described in the remote API documentation not line up with the command line parameters (at least, not in any obvious way!), but it is also unclear which fields are required for the create (assuming we have to do a create, which isn't established yet!) and which are required for the start.
This is just related to starting a container. Suppose you want to stop and destroy a container, as in:
docker stop SomeService
docker rm SomeService
Granted, there appear to be one-to-one commands for doing this in the remote API:
POST /containers/(id or name)/stop
DELETE /containers/(id or name)
But it seems that the IDs you can pass them do not correspond to the IDs shown when you list containers or images.
Is there somewhere I can go to gather information on how to set up and use remote API commands that relates these commands and their JSON parameters to the commands and parameters in the command line?
Failing that, can someone please tell me how to do the start that I showed in my illustration using the remote API?
In any event: is there someone working on Docker development to whom I can bring these documentation issues? It is, I believe, a big "hole" in their documentation.
Someone please advise...

docker run is a combination of docker create, followed by docker start, so https://docs.docker.com/engine/reference/api/docker_remote_api_v1.22/#create-a-container, followed by https://docs.docker.com/engine/reference/api/docker_remote_api_v1.22/#start-a-container
If you're running "interactively", you may need to attach to the container after that; https://docs.docker.com/engine/reference/api/docker_remote_api_v1.22/#attach-to-a-container
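To make the mapping concrete, here is a rough Python sketch of that create/start sequence (plus the stop/remove from the question) against API v1.22. It assumes the daemon is reachable over TCP on localhost:2375 (by default it only listens on a Unix socket) and maps each CLI flag from the question onto the JSON fields documented for create:

import requests  # third-party 'requests' package, assumed installed

API = "http://localhost:2375"  # assumption: daemon configured to listen on TCP

# docker run --rm=true -i -t -v /home/user/resources:/files \
#            -p 8080:8080 --name SomeService myImage_v3
create_body = {
    "Image": "myImage_v3",
    "Tty": True,                           # -t
    "OpenStdin": True,                     # -i
    "AttachStdin": True,                   # -i
    "ExposedPorts": {"8080/tcp": {}},      # implied by -p
    "HostConfig": {
        "Binds": ["/home/user/resources:/files"],              # -v host:container
        "PortBindings": {"8080/tcp": [{"HostPort": "8080"}]},  # -p 8080:8080
    },
}

# Step 1: create. The --name goes in the query string, not the JSON body.
r = requests.post(API + "/containers/create",
                  params={"name": "SomeService"}, json=create_body)
r.raise_for_status()
container_id = r.json()["Id"]

# Step 2: start. In v1.22 an empty body is fine; the config lives on the container.
requests.post(API + "/containers/" + container_id + "/start").raise_for_status()

# --rm has no server-side field in v1.22: the CLI itself deletes the container
# after it exits (HostConfig.AutoRemove only arrived later, in API v1.25).

# Stopping and removing: docker rm maps to DELETE, not to the /kill endpoint.
requests.post(API + "/containers/SomeService/stop").raise_for_status()
requests.delete(API + "/containers/SomeService").raise_for_status()

On the ID confusion: the API returns and accepts full 64-character IDs, while the CLI lists truncated 12-character prefixes of the same IDs; either form, or the container name, works in the URL path.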

Can I frontload user input, automating Google Cloud SDK gcloud init - interactive command?

I have a very similar question to this one. @cherba already gave a very rich and helpful dissection of the gcloud init command.
So what I really want to do in automating gcloud init is:
Front load my interactive input: I want the users to supply all input at the beginning and not be prompted again.
Request a token before gcloud is even installed, probably from a static perma-link; the resulting token should be usable only once, probably with a limited lifetime, maybe an hour. This is very similar to how gcloud init --console-only already works, except with an unchanging initial URL.
I specifically want this to be for a user account, not a service account.
This would allow me to prompt the user upfront for all configuration input and build the fully configured system automatically, over lunch or a long coffee break, with no additional babysitting needed.
The goal here is distinct development environments, not deploying to an array of boxes.
How can I accomplish this?
This is not supported officially and is not recommended. Service accounts are meant for this kind of thing. You should use service accounts as explained in the earlier answer.
What the SDK is essentially doing is submitting a token request to https://accounts.google.com/o/oauth2/auth with the following scopes:
'https://www.googleapis.com/auth/userinfo.email'
'https://www.googleapis.com/auth/cloud-platform'
'https://www.googleapis.com/auth/appengine.admin'
'https://www.googleapis.com/auth/compute'
'https://www.googleapis.com/auth/accounts.reauth'
For this to succeed, you need to provide the regular OAuth parameters like client_id and client_secret. To generate these, you will need to register your app as an OAuth app in the developer console.
This may not work if third-party authorizations are not supported. I have not tried it.
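For illustration, here is a minimal Python sketch of that authorization-code flow. The client_id and client_secret are placeholders from your own OAuth app registration, and the out-of-band redirect URI used below has since been deprecated by Google, so treat this as a sketch of the mechanics rather than a supported recipe:

import urllib.parse
import requests  # third-party 'requests' package, assumed installed

CLIENT_ID = "your-client-id.apps.googleusercontent.com"  # placeholder
CLIENT_SECRET = "your-client-secret"                     # placeholder
SCOPES = [
    "https://www.googleapis.com/auth/userinfo.email",
    "https://www.googleapis.com/auth/cloud-platform",
    "https://www.googleapis.com/auth/appengine.admin",
    "https://www.googleapis.com/auth/compute",
    "https://www.googleapis.com/auth/accounts.reauth",
]

# Step 1: build the URL the user visits to grant access. For a fixed
# client and scope list this URL is stable, i.e. the "perma-link" part.
auth_url = "https://accounts.google.com/o/oauth2/auth?" + urllib.parse.urlencode({
    "client_id": CLIENT_ID,
    "redirect_uri": "urn:ietf:wg:oauth:2.0:oob",  # out-of-band; deprecated
    "response_type": "code",
    "scope": " ".join(SCOPES),
    "access_type": "offline",  # also request a refresh token
})
print("Visit this URL and paste the code back:", auth_url)
code = input("Code: ")

# Step 2: exchange the one-time code for access/refresh tokens.
resp = requests.post("https://oauth2.googleapis.com/token", data={
    "code": code,
    "client_id": CLIENT_ID,
    "client_secret": CLIENT_SECRET,
    "redirect_uri": "urn:ietf:wg:oauth:2.0:oob",
    "grant_type": "authorization_code",
})
resp.raise_for_status()
tokens = resp.json()  # contains access_token and, usually, refresh_token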
You said "Front load my interactive input:" and also "Request a token, before gcloud is even installed". The problem with your request above, is that you will need to install gcloud at some point in time and gcloud will use its own authentication methods to connect, meaning that authentication should happen after gcloud is installed because you will always use the command “gcloud ….” to somehow connect. The previous post that you linked explains this.
Because of this, I suspect you need a workflow where gcloud commands run for multiple users/projects at the same time, by running gcloud many times in parallel. A shell runs one command at a time, so "front loading" the authentication (as you call it) means either using the screen command inside one SSH session or running multiple SSH sessions at the same time. If that's not what you need, then a simple shell script should do; it will run commands one after the other rather than in parallel.
For example, let's say you want to install a package that will take a long time while being able to run another command at the same time. You could do the following:
$ screen
$ sudo apt-get install [package-name]
Press Ctrl-A, then d, to detach and temporarily exit this session
$ … (do another process here)
$ screen -r (re-attaches the screen session to continue the apt-get install started above)
The example above is somewhat the equivalent of having multiple SSH sessions open at the same time. You could open multiple screens and launch multiple authentications at the same time, thereby also controlling when you want to stop a session. Keep in mind that if you run things in parallel, you will definitely need to load the authentication file as mentioned in the post you linked. Otherwise, you can use simple shell scripting and pass arguments. Since I'm not sure of the process that comes before/after your authentication, it's hard for me to provide a more precise example; there's a lot to consider and many unknowns about your workflow. I've included references below that show all the possibilities.
References:
- https://www.linode.com/docs/networking/ssh/using-gnu-screen-to-manage-persistent-terminal-sessions/
- https://www.geeksforgeeks.org/screen-command-in-linux-with-examples/
- https://www.lifewire.com/pass-arguments-to-bash-script-2200571
- https://cloud.google.com/sdk/gcloud/reference/auth/activate-service-account
- https://cloud.google.com/sdk/gcloud/reference/auth/login
- https://cloud.google.com/sdk/docs/scripting-gcloud

How to figure out port information in mininet

I use Python to create a custom mininet topology. Knowing the topology in detail is not important for the question.
I use Ryu as the controller, specifically its app "ofctl_rest.py". This controller does not install rules in the switch on its own; you have to issue REST commands to establish rules, and every REST request (rule) has to specify an outgoing port. To specify this port I need information about the topology of the network.
I need to know which link is connected to a port and which interface the port runs on. Also helpful would be the foreign interface, foreign switch/host, and foreign port on the other end of the actual port. How can I retrieve this information?
Please help me. I am really frustrated right now, because I do not know how to figure it out.
Inside the mininet CLI you can use the net command to find out about the topology. The nodes command will show you a list of nodes.
You can also use the dump command to display the interface details.
For information on the 'hosts', such as they are, you can run normal Linux commands on each host, e.g.
mininet> h1 ifconfig
will run ifconfig on host h1, showing you some of the network configuration for that host.
Given that you seem to be running mininet from a custom script, you could start the CLI at the end of your script (if that's possible) e.g.
from mininet.net import Mininet
from mininet.cli import CLI

net = Mininet(topo=your_topo)  # your custom Topo instance
net.start()
CLI(net)  # drops into the interactive mininet> prompt
net.stop()
Otherwise, you can use the mininet python APIs to find much of the information.
The dump* functions in mininet.util will print out lots of information.
topo.links() will give you a list of the links in the topology.
topo.linkinfo() might give you some extra info.
For flow information you can either run ovs-dpctl, ovs-ofctl etc. outside of mininet (in a normal shell), or run the equivalents without the ovs- prefix inside the mininet CLI.
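As a sketch of that API route (assuming a running net built from your topology and, for illustration, a switch named s1), something like this maps interfaces to their peers and to port numbers, which normally match the OpenFlow port numbers that ofctl_rest rules refer to:

from mininet.util import dumpNodeConnections

# Print each node's connections, e.g. "s1 lo: s1-eth1:h1-eth0 s1-eth2:s2-eth1".
dumpNodeConnections(net.hosts + net.switches)

# Walk the links directly to see which interface pairs are connected.
for link in net.links:
    print(link.intf1, "<->", link.intf2)

# For a given switch, map each interface to its port number and its link,
# i.e. the port/interface/peer information asked about in the question.
s1 = net.get("s1")
for intf in s1.intfList():
    print(s1.ports[intf], intf.name, "->", intf.link)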

Use ssh script return value in Jenkins

We're deploying our application using SSH scripts. For the production stage we need to figure out which of two clusters is currently active. This can only be determined reliably by running a command on a remote host and interpreting its output. Unfortunately, as far as I know, there is no SSH plugin that does that.
The existing plugins only seem to be able to check whether the SSH script's exit code is different from zero.
Currently I only see two undesirable solutions:
use SSH from a script in Python, Groovy, etc. (meaning we would have to provide SSH authentication to it somehow)
let the SSH command write to a file that is then copied to Jenkins and interpreted there (inelegant and cumbersome)
OK, based on what you mentioned in the comment, I think you can try something like the approach given here, then copy that file back to Jenkins using FTP and read the file's contents.
Or you can have the whole process orchestrated in an Ant script by using the SSHExec task and capture the output in Ant.
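If you'd rather go with the first option from the question (SSH from a script), here is a minimal Python sketch; it assumes key-based SSH authentication is already set up for the Jenkins user, and the host name and status command are hypothetical:

import subprocess
import sys

# Hypothetical host and remote command; replace with your actual cluster check.
result = subprocess.run(
    ["ssh", "deploy@prod-gateway", "cat /etc/cluster/active"],
    capture_output=True, text=True)

if result.returncode != 0:
    sys.exit("remote check failed: " + result.stderr)

active_cluster = result.stdout.strip()
print("Active cluster:", active_cluster)  # feed this into later deploy steps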

Connecting to a running docker container - differences between using ssh and running a command with "-t -i" parameters

Could you please point out the difference between installing openssh-server and starting an ssh session with a given docker container, versus running docker run -t -i ubuntu /bin/bash and then performing some operations? How does docker attach compare to those two methods?
Difference 1. If you want to use ssh, you need to have ssh installed on the Docker image and running on your container. You might not want to because of extra load or from a security perspective. One way to go is to keep your images as small as possible - avoids bugs like heartbleed ;). Whether you want ssh is a point of discussion, but mostly personal taste. I would say only use it for debugging, and not to actually change your image. If you would need the latter, you'd better make a new and better image. Personally, I have yet to install my first ssh server on a Docker image.
Difference 2. Using ssh, you can start your container as specified by the CMD and maybe ENTRYPOINT in your Dockerfile. SSH then allows you to inspect that container and run commands for whatever use case you might need. On the other hand, if you start your container with the bash command, you effectively override your Dockerfile CMD (a small sketch of this follows at the end of this answer). If you then want to test that CMD, you can still run it manually (probably as a background process). When debugging my images, I do that all the time. This is from a development point of view.
Difference 3. An extension of the 2nd, but from a different point of view. In production, ssh will always allow you to check out your running container. Docker has other options useful in this respect, like docker cp, docker logs and indeed docker attach.
According to the docs "The attach command will allow you to view or interact with any running container, detached (-d) or interactive (-i). You can attach to the same container at the same time - screen sharing style, or quickly view the progress of your daemonized process." However, I am having trouble actually using this in a useful manner. Maybe someone who uses it could elaborate on that?
Those are the only essential differences. There is no difference for image layers, committing or anything like that.
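As promised above, a small sketch of the CMD override from Difference 2, using the Docker SDK for Python (assuming it is installed; the image name is a placeholder, and a container whose default CMD exits immediately will of course stop again right away):

import docker  # the Docker SDK for Python ('pip install docker')

client = docker.from_env()

# Start the image with whatever CMD/ENTRYPOINT its Dockerfile defines;
# this is the container you would then inspect via ssh or docker attach.
c1 = client.containers.run("myimage", detach=True)

# Override the image's CMD with an interactive shell, the equivalent of
# docker run -t -i myimage /bin/bash; the original CMD never runs here.
c2 = client.containers.run("myimage", "/bin/bash",
                           tty=True, stdin_open=True, detach=True)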

WinSCP: Current SFTP-3 session does not support command you request. Separate shell session may be opened to process the command

I'm using WinSCP to interact with a remote server that supports only SFTP and doesn't allow SSH access.
My interaction involves moving/deleting a subset of files (identified by file names) in a certain directory.
To simplify this, I would typically synchronize [ Remote -> Local ], delete the files locally using the cygwin commandline (so that I can specify a list of file names instead of selecting files in the GUI) and then synchronize [ Local -> Remote ] to push the deletes to remote.
But, now, I want to further simplify the process so I can hand this over to an operations person. I went looking and was delighted to find that WinSCP supports 'commands'.
It would be great if I could enter something like this in the 'Command' field at the bottom in the 'Commander' view of WinSCP:
get queue-queue-from-DLQ-ID-69703273-db51-11e1-ba9f-005056010165 \
queue-queue-from-DLQ-ID-3d64697a-db51-11e1-b86e-005056010166 \
queue-queue-from-DLQ-ID-76fdb365-db50-11e1-b78d-005056010164 \
queue-queue-from-DLQ-ID-76ed3836-db50-11e1-ba9f-005056010165
But when I enter this in the 'Command' field, I get the following error:
Current SFTP-3 session does not support command you request. Separate shell session may be opened to process the command. Do you want to open separate shell session?
When I hit ok, I get the following error:
Error skipping startup message. Your shell is probably incompatible with the application (BASH is recommended).
The latter one is probably due to the fact that SSH is not supported.
But my question is, since get is an SFTP command, why am I getting the first error? Doesn't WinSCP itself use that command under the covers to support a GUI 'copy to local' operation?
How can I configure either WinSCP or the Linux box so that I can do what I have shown above?
I guess this answers my question: http://winscp.net/eng/docs/remote_command
Apparently, the 'Command' feature is only supported for SCP.
I wonder why WinSCP doesn't expose a command-line interface for the SFTP operations that are generally supported in an interactive sftp session.
You can use the WinSCP command-line scripting interface to run the get command:
https://winscp.net/eng/docs/scripting
The 'Commands' feature (remote command execution) is supported even for the SFTP protocol, but it executes the command on the remote server. You cannot use it to automate WinSCP itself, and there's no remote command you could easily use to download a file.
See https://winscp.net/eng/docs/remote_command
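As a sketch of that scripting interface, driven from Python so an operations person only has to run one script (the install path, credentials, host key, and directories are placeholders; the file name is taken from the question):

import subprocess

WINSCP = r"C:\Program Files (x86)\WinSCP\winscp.com"  # adjust to your install

# Each /command argument is one line of a WinSCP script; see
# https://winscp.net/eng/docs/scripting
subprocess.run([
    WINSCP,
    "/ini=nul",  # don't touch the GUI configuration
    "/command",
    'open sftp://user:password@example.com/ -hostkey="ssh-rsa 2048 xx:xx:..."',
    "cd /remote/queue/dir",
    "get queue-queue-from-DLQ-ID-69703273-db51-11e1-ba9f-005056010165 C:\\local\\dlq\\",
    "exit",
], check=True)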