I use Python to create a custom Mininet topology. Knowing the topology in detail is not important for the question.
I use Ryu as the controller, specifically the app "ofctl_rest.py". This controller does not install rules in the switches on its own; you have to issue REST commands to establish rules. In every REST request (rule) you have to specify an outgoing port, and to specify this port I need information about the topology of the network.
I need to know which link is connected to a port and which interface the port runs on. It would also be helpful to know the foreign interface, foreign switch/host and foreign port on the other end of a given port. How can I retrieve this information?
Please help me. I am really frustrated right now, because I do not know how to figure it out.
Inside the mininet CLI you can use the net command to find out about the topology. The nodes command will show you a list of nodes.
You can also use the dump command to display the interface details.
For information on the 'hosts', such as they are, you can run normal linux commands on each host, e.g.
mn> h1 ifconfig
will run ifconfig on host h1, showing you some of the network configuration for that host.
Given that you seem to be running mininet from a custom script, you could start the CLI at the end of your script (if that's possible) e.g.
from mininet.net import Mininet
from mininet.cli import CLI

net = Mininet(topo=your_topo)
net.start()
CLI(net)    # drops you into the mininet> prompt; type exit to continue
net.stop()
Otherwise, you can use the mininet python APIs to find much of the information.
The dump* functions in mininet.util will print out lots of information.
topo.links() will give you a list of the links in the topology.
topo.linkInfo() might give you some extra info.
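To answer the original question directly, here is a rough (untested) sketch of pulling per-port link information out of a running Mininet network. It assumes net is the Mininet object from your custom script and relies on the net.links, intf1/intf2 and node.ports attributes of the Python API:

# Sketch: for every link, print the local interface, its port number and node,
# plus the interface, port and node on the far end.
def dump_ports(net):
    for link in net.links:
        for near, far in ((link.intf1, link.intf2), (link.intf2, link.intf1)):
            print('%s is port %s on %s, connected to %s (port %s on %s)' % (
                near.name, near.node.ports.get(near), near.node.name,
                far.name, far.node.ports.get(far), far.node.name))

# call it after net.start(), e.g.
# dump_ports(net)

The port numbers printed there are the values you would then use as output ports in your REST rules.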
For flow information you can either run ovs-dpctl, ovs-ofctl etc. outside of mininet (in a normal shell), or run the equivalents without the ovs- prefix inside the mininet CLI.
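And to tie that back to the Ryu side of the question: once you know the right output port for a switch, a rule can be pushed to ofctl_rest with a plain HTTP POST to /stats/flowentry/add. A minimal sketch, assuming the controller's REST API is reachable on 127.0.0.1:8080, the switch has datapath id 1, port 2 is the outgoing port you looked up, and the requests package is available:

import requests

# Flow rule: forward traffic arriving on port 1 out of port 2 on the switch
# with dpid 1 (all three numbers are assumptions for the example).
rule = {
    'dpid': 1,
    'match': {'in_port': 1},
    'actions': [{'type': 'OUTPUT', 'port': 2}],
}
resp = requests.post('http://127.0.0.1:8080/stats/flowentry/add', json=rule)
resp.raise_for_status()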
I have a server running several Docker containers. I want to know the network usage of individual, specific Docker containers.
I was able to get the overall traffic using the API call below:
http://<server-ip>:19999/api/v1/data?chart=net.docker0&after=-60&before=0&points=1&group=median&gtime=0&format=json&options=seconds&options=jsonwrap
I went through the documentation and didn't find anything helpful.
Usually, the issue with such questions comes from Netdata not being granted the access required to identify the docker container.
I'd take a look at https://learn.netdata.cloud/docs/agent/packaging/docker/#docker-container-names-resolution and go through https://github.com/netdata/netdata/issues/6882 as well.
If you know the specific container name then, I think, you should just be able to pull the data directly from the container-specific chart.
For example, I have a container called airbyte-webapp, so I can get its network usage via its own specific chart at /api/v1/data?chart=cgroup_airbyte-webapp.net_eth0
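If you want to pull that per-container data from a script rather than a browser, something along these lines should work. This is only a sketch: it assumes Netdata is listening on port 19999 as in the question, that the chart really is named cgroup_airbyte-webapp.net_eth0 as above, and that the requests package is available:

import requests

NETDATA = 'http://<server-ip>:19999'   # replace with your server address

params = {
    'chart': 'cgroup_airbyte-webapp.net_eth0',  # container-specific net chart
    'after': -60,        # last 60 seconds
    'before': 0,
    'points': 1,         # collapse the window into a single data point
    'group': 'median',
    'format': 'json',
}
resp = requests.get(NETDATA + '/api/v1/data', params=params)
print(resp.json())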
I'm pretty new to the Gcloud environment, but getting the hang of it.
With our first project live on an instance, I've been shuffling some static IPs, instances and snapshots around for an optimal deployment workflow. But I can't understand what's going on now:
I have two instances, say live-1 and dev-2.
Now I can connect to live-1 using gcloud compute ssh live-1 and it's okay.
When I try to connect to dev-2 using gcloud compute ssh dev-2, it logs me in to live-1.
The first time I tried to ssh to dev-2 it took longer than usual. After that it just connects me to the wrong instance immediately.
The goal was (as you might've guessed) to copy the live environment to a testing one. I created an image of live-1 and cloned it to set up dev-2. In my earlier experience trying this, it was possible and worked as expected.
Whenever I use the Compute Console in the browser and use the online SSH tool from the instance list, it connects to dev-2 properly. But on my local machine, using the aforementioned command, I get connected to live-1.
I already removed the IP for dev-2 from my known hosts, figuring it's cached somewhere, but no luck. What am I missing here?
Edit: I just found out that the instances are separate, though 'named' the same; if I log in to dev-2, I do see myuser@live-1: in the shell prompt, but it appears to be running a separate instance. I created a dummy file on the supposed dev-2, and it doesn't show up on the actual live-1 machine.
So this is very confusing; I rely on the user@host prefix in front of every shell line to know which machine I'm actually working on, and having two instances with the same name but different environments is confusing.
Ok, it was dead simple. Just run sudo hostname [desiredhostname] in the terminal, and restart it.
So in my case I logged in to dev-2 and ran sudo hostname dev-2.
Docker documentation is pretty good at describing what you can do from the command line.
It also gives a pretty comprehensive description of the commands associated with the remote API.
It does not, however, appear to give sufficient context for using the remote API to do things that one would do using the command line.
An example of what I am talking about: suppose you want to do a command like:
docker run --rm=true -i -t -v /home/user/resources:/files -p 8080:8080 --name SomeService myImage_v3
using the Remote API. There is a container "run" command in the Remote API:
POST /containers/(id or name)/start
And this command refers back to the create container command for the rather long list of JSON strings that you would need to add in order to do the actual start.
The problem here is: first, just calling this command doesn't work. Apparently there is more that you have to do (I am guessing you have to do a create, then a start). Second, it is unclear which JSON strings you need to use in order to do what I showed in the command line (like setting ports, mapping to the external directory, etc). Not only do the JSON strings provided in the remote API documentation not line up with the command line parameters (at least, not in any way that is obvious!), but it is unclear which JSON strings are required for the create (assuming that we have to do a create, which isn't established yet!) and which are required for the start.
This is just related to starting a container. Suppose you want to stop and destroy a container, as in:
docker stop SomeService
docker rm SomeService
Granted, there appear to be one-to-one commands for doing this in the remote API:
POST /containers/(id or name)/stop
POST /containers/(id or name)/kill
But it seems that the IDs you can pass them do not correspond to the IDs shown when you list containers or images.
Is there somewhere I can go to gather information on how to set up and use remote API commands that relates these commands and their JSON parameters to the commands and parameters in the command line?
Failing that, can someone please tell me how to do the start that I showed in my illustration using the remote API?
In any event: is there someone working on docker development I can bring these documentation issues to? It is, I believe, a big "hole" in their documentation.
Someone please advise...
docker run is a combination of docker create, followed by docker start, so https://docs.docker.com/engine/reference/api/docker_remote_api_v1.22/#create-a-container, followed by https://docs.docker.com/engine/reference/api/docker_remote_api_v1.22/#start-a-container
If you're running "interactively", you may need to attach to the container after that; https://docs.docker.com/engine/reference/api/docker_remote_api_v1.22/#attach-to-a-container
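For the concrete docker run from the question, the two calls would look roughly like the sketch below. It is not a drop-in solution: it assumes the daemon is reachable over TCP on localhost:2375 rather than the default unix socket (docker-py handles the socket for you), and note that --rm is implemented client-side by the docker CLI in this API version, so you have to remove the container yourself afterwards.

import requests   # assumes the 'requests' package; docker-py wraps all of this

DOCKER = 'http://localhost:2375'   # assumption: daemon exposed over TCP

# The docker run flags split between the container config and its HostConfig,
# as described in the create-container documentation.
create_body = {
    'Image': 'myImage_v3',
    'Tty': True,                      # -t
    'OpenStdin': True,                # -i
    'ExposedPorts': {'8080/tcp': {}},
    'HostConfig': {
        'Binds': ['/home/user/resources:/files'],              # -v
        'PortBindings': {'8080/tcp': [{'HostPort': '8080'}]},  # -p 8080:8080
    },
}
resp = requests.post(DOCKER + '/containers/create',
                     params={'name': 'SomeService'},           # --name
                     json=create_body)
resp.raise_for_status()
container_id = resp.json()['Id']

# Now start it.
requests.post(DOCKER + '/containers/' + container_id + '/start').raise_for_status()

That also answers the ID confusion: create returns the full 64-character id, while docker ps shows a truncated one; the stop, kill and remove endpoints accept the full id, a unique prefix of it, or the container name.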
I have an Ansible configuration which I know works on my local machines. However, I'm now trying to set it up on my company's machines, which use a wrapper command similar to ssh (let's call it 'myssh').
for example, to access these machines, instead of writing
ssh myuser@123.123.123.123
you write
myssh myuser@123.123.123.123
which ends up calling ssh, among other things.
My question is, is there a way to swap which command ansible uses for accessing machines?
You can create a Connection Type Plugin to achieve this. Looking at the ssh plugin, it appears it might be as easy as replacing the ssh_cmd built in line 333, and specifying myssh in line 69.
See here for where to place the modified file. In addition, you can specify a custom location and let Ansible know about it via the connection_plugins setting in ansible.cfg.
Finally again in your ansible.cfg set the transport setting to your new plugin:
transport = myssh
PS: I have never done anything like that before. This is only info from the docs.
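To make that a little more concrete, here is an equally untested sketch of such a plugin, written against the Ansible 2.x plugin layout. It subclasses the bundled ssh plugin and only swaps the executable; the _build_command name is taken from that plugin's source and may differ between Ansible versions, so treat this as a starting point only.

# connection_plugins/myssh.py -- untested sketch
from ansible.plugins.connection.ssh import Connection as SSHConnection


class Connection(SSHConnection):
    # Name used for 'transport = myssh' in ansible.cfg / 'connection: myssh'.
    transport = 'myssh'

    def _build_command(self, binary, *args):
        # Users, hosts, ControlPersist options etc. are all inherited from the
        # ssh plugin; only the executable that gets invoked is replaced.
        if binary == 'ssh':
            binary = 'myssh'
        return super(Connection, self)._build_command(binary, *args)

Drop the file into a connection_plugins directory next to your playbook (or any path listed in the connection_plugins setting), and the transport = myssh line above should pick it up.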
Could you please point out the difference between installing openssh-server and starting an SSH session with a given Docker container, versus running docker run -t -i ubuntu /bin/bash and then performing some operations? How does docker attach compare to those two methods?
Difference 1. If you want to use ssh, you need to have ssh installed in the Docker image and running in your container. You might not want that because of the extra load, or from a security perspective: one way to go is to keep your images as small as possible, which avoids bugs like Heartbleed ;). Whether you want ssh is a point of discussion, but mostly personal taste. I would say only use it for debugging, and not to actually change your image; if you need the latter, you'd better build a new and better image. Personally, I have yet to install my first ssh server on a Docker image.
Difference 2. Using ssh you can start your container as specified by the CMD and maybe ENTRYPOINT in your Dockerfile. Ssh then allows you to inspect that container and run commands for whatever use case you might need. On the other hand, if you start your container with the bash command, you effectively overwrite your Dockerfile CMD. If you then want to test that CMD, you can still run it manually (probably as a background process). When debugging my images, I do that all the time. This is from a development point of view.
Difference 3. An extension of the 2nd, but from a different point of view. In production, ssh will always allow you to check out your running container. Docker has other options useful in this respect, like docker cp, docker logs and indeed docker attach.
According to the docs, "The attach command will allow you to view or interact with any running container, detached (-d) or interactive (-i). You can attach to the same container at the same time - screen sharing style, or quickly view the progress of your daemonized process." However, I am having trouble actually using this in a useful manner. Maybe someone who uses it could elaborate on that?
Those are the only essential differences. There is no difference for image layers, committing or anything like that.