I have a server running several Docker containers. I want to know the network usage of individual, specific Docker containers.
I was able to get the total traffic using the API below:
http://<server-ip>:19999/api/v1/data?chart=net.docker0&after=-60&before=0&points=1&group=median&gtime=0&format=json&options=seconds&options=jsonwrap
I went through the documentation and didn't find anything helpful.
Usually, the issue behind questions like this is that Netdata has not been granted the access it needs to resolve Docker container names.
I'd take a look at https://learn.netdata.cloud/docs/agent/packaging/docker/#docker-container-names-resolution and go through https://github.com/netdata/netdata/issues/6882 as well.
If you know the specific container name then, I think, you should be able to pull the data directly from the container-specific chart.
For example, I have a container called airbyte-webapp, so I can get its network usage via its own chart at /api/v1/data?chart=cgroup_airbyte-webapp.net_eth0
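You can also pull that per-container chart programmatically. Here is a minimal sketch in Python; the host is a placeholder and the chart name is the airbyte-webapp example from above, so adjust both to your setup:
import requests

NETDATA = "http://SERVER-IP:19999"  # placeholder: your Netdata host

params = {
    "chart": "cgroup_airbyte-webapp.net_eth0",  # container-specific chart
    "after": -60,    # last 60 seconds
    "before": 0,
    "points": 1,
    "group": "median",
    "format": "json",
    "options": ["seconds", "jsonwrap"],  # repeated params, as in the URL above
}
resp = requests.get(NETDATA + "/api/v1/data", params=params)
print(resp.json())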
I have 4 containers running in the same Docker network:
mongodb
our api server
a selenium server
our tests themselves
I get this error from our test container:
WebDriverError: File not found: /root/cdt-tests/csv-data/IT-DE-Jasper.csv
However, from my test logs, this file totally exists...in the test container.
The problem, I think, is that the selenium server is not looking at the same filesystem as our test container, because they are running in different containers.
What is the best way to solve this problem?
a. Should I try to run them in the same container?
b. Can I somehow get them to share the same filesystem?
c. ?
Just because you are sharing the network doesn't mean you are sharing the volumes. See "In Docker, how can I share files between containers and then save them to an image?" for how to do that, as @alex-blex suggested.
You may be able to connect the containers using a user-defined network, as explained on Docker's site.
If you've already done that, it might be an issue with the path to the file you're using in your test. Perhaps it wants an absolute path, because the containers are considered different entities on the Docker network.
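To see what sharing a volume looks like in practice, here is a minimal sketch driving Docker from Python; the image and network names are illustrative, and the mount path comes from the error message above:
import subprocess

def docker(*args):
    subprocess.run(["docker", *args], check=True)

# A named volume both containers mount at the same path
docker("volume", "create", "csv-data")

# Hypothetical image names; both containers now see the same files
docker("run", "-d", "--name", "tests", "--network", "test-net",
       "-v", "csv-data:/root/cdt-tests/csv-data", "my-tests-image")
docker("run", "-d", "--name", "selenium", "--network", "test-net",
       "-v", "csv-data:/root/cdt-tests/csv-data", "selenium/standalone-chrome")

With that in place, a CSV written by the test container is visible to the Selenium server at the same absolute path.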
I am using a GitLab docker image for integration testing of a service I'm helping to develop. Ideally, the image would be a preconfigured snapshot of GitLab with different users and repos available to run tests against. So the problem ends up being, what is a good way to automate the creation of 'snapshots' of GitLab (that can then be versioned etc.)?
My current solution to this problem is to use GitLab's built in backup utility via gitlab-rake gitlab:backup:create after getting GitLab to a state that I want. This then lets me use GitLab's gitlab-rake gitlab:backup:restore in a hook when the container is starting up to get the container back to the state that I expect (the backup having been ADDed in the Dockerfile for the image). This has the advantage of being relatively lightweight (backups are on the order of MBs) and the backups can be checked in to version control.
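As a rough illustration of that restore hook, something like the following could run at container start; the backup timestamp is a placeholder, and the real hook could just as well be a few lines of shell:
import subprocess

BACKUP = "1393513186"  # placeholder: timestamp of the backup ADDed into the image

# GitLab must be up before restoring; readiness polling is omitted here.
subprocess.run(
    ["gitlab-rake", "gitlab:backup:restore", "BACKUP=" + BACKUP, "force=yes"],
    check=True,
)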
I have tried using docker export along with docker import to save the state of the container and then create an image based on that state. This has the advantage of being easy to automate since it is directly supported by Docker, but ends up being fairly expensive considering what the goal is (having users and repos available to test against). It also would require the images to be pushed to a registry of some kind in order to be easily distributed. Perhaps this is the best solution because it is well supported though.
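For comparison, the export/import round trip is just the following; the container and image names are hypothetical:
import subprocess

# Flatten the container's filesystem into a tarball...
with open("gitlab-snapshot.tar", "wb") as tar:
    subprocess.run(["docker", "export", "gitlab-test"], stdout=tar, check=True)

# ...then turn the tarball back into an image.
with open("gitlab-snapshot.tar", "rb") as tar:
    subprocess.run(["docker", "import", "-", "gitlab-snapshot:latest"],
                   stdin=tar, check=True)

One caveat: docker import drops image metadata such as CMD, ENV and EXPOSE, so those have to be re-specified on the imported image.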
I suppose my question is, what is the Docker way of approaching a problem like this?
I'm building a web application that needs to allow users to upload profile pictures. I want the application to be self-contained, so that people don't need to have an s3 or other cloud storage service account.
It's best to keep Docker containers as disposable as possible, so I guess I should use a volume. I want the volume to be created automatically, so people don't have to specify one when running the container, but the documentation for the VOLUME instruction in Dockerfiles confuses me.
The VOLUME instruction creates a mount point with the specified name and marks it as holding externally mounted volumes from native host or other containers.
What does it mean to be marked as such? The data is to be written by the application; it's not coming from an external source.
When you mark a volume in the Dockerfile, say VOLUME /site/uploads, it makes it very easy to later run another container with --volumes-from <container-name> and have /site/uploads available in the new container, with all the data that has been written (and that will be written, if the first container is still running).
Also, you'll be able to see that volume with docker volume ls after you start the container the first time.
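To make that concrete, here is a sketch; site-image is a hypothetical image whose Dockerfile contains VOLUME /site/uploads:
import subprocess

def docker(*args):
    subprocess.run(["docker", *args], check=True)

# Docker creates an anonymous volume for /site/uploads automatically
docker("run", "-d", "--name", "web", "site-image")

# A second container sees the same /site/uploads, existing data included
docker("run", "--rm", "--volumes-from", "web", "busybox", "ls", "/site/uploads")

# The anonymous volume now shows up in the volume list
docker("volume", "ls")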
The only problem you might have if you delete the container is that you will lose the mapping, provided by docker inspect <container-name>, that tells you which volume your container created. To see that volume clearly and quickly, try docker inspect <container-name> | jq '.[].Mounts' if you have jq installed. Otherwise, docker inspect <container-name> | grep Mounts -A 10 might be enough when you only have one volume. (You can also just wade through all the JSON yourself.)
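If you don't have jq, the same information is easy to pull with a few lines of Python (assuming the docker CLI is on your PATH; "web" is the hypothetical container from the sketch above):
import json
import subprocess

out = subprocess.run(["docker", "inspect", "web"],
                     capture_output=True, text=True, check=True).stdout
for mount in json.loads(out)[0]["Mounts"]:
    print(mount.get("Name"), "->", mount["Destination"])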
Even if you remove the container that created the volume, the volume will remain on your system, viewable with docker volume ls unless you run docker volume rm <volume-name>
Note: I'm using docker version 1.10.3
You will not have problems with that; the images will be written to the mounted filesystem just fine.
You may have to loosen the permissions on the uploads folder so that the application can write to it.
I use Python to create a custom Mininet topology. The details of the topology are not important for this question.
I use Ryu as the controller, specifically the app ofctl_rest.py. This controller does not install rules in the switch on its own; you have to issue REST commands to establish rules. In every REST request (rule) you have to specify an outgoing port. To specify this port I need information about the topology of the network.
I need to know which link is connected to a port and which interface the port runs on. It would also be helpful to know the foreign interface, foreign switch/host, and foreign port for a given port. How can I retrieve this information?
Please help me. I am really frustrated right now, because I do not know how to figure it out.
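For context, the kind of REST call I mean looks roughly like this sketch; the dpid and port numbers are made up, and ofctl_rest listens on port 8080 by default:
import requests

rule = {
    "dpid": 1,                                   # switch datapath id
    "match": {"in_port": 1},
    "actions": [{"type": "OUTPUT", "port": 2}],  # the outgoing port I have to know
}
resp = requests.post("http://127.0.0.1:8080/stats/flowentry/add", json=rule)
resp.raise_for_status()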
Inside the mininet CLI you can use the net command to find out about the topology. The nodes command will show you a list of nodes.
You can also use the dump command to display the interface details.
For information on the 'hosts', such as they are, you can run normal Linux commands on each host, e.g.
mn> h1 ifconfig
will run ifconfig on host h1, showing you some of the network configuration for that host.
Given that you seem to be running mininet from a custom script, you could start the CLI at the end of your script (if that's possible) e.g.
from mininet.net import Mininet
from mininet.cli import CLI

net = Mininet(your_topo)  # your_topo: the topology object your script builds
net.start()
CLI(net)   # drops you into the mininet> prompt; exit it to continue
net.stop()
Otherwise, you can use the mininet python APIs to find much of the information.
the dump* functions in mininet.util will print out lots of information.
topo.links() will give you a list of the links in the topology.
topo.linkInfo() might give you some extra info.
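Putting that together, a minimal sketch (assuming net is a started Mininet instance built from your custom topology):
from mininet.util import dumpNodeConnections

# Which interface of which node connects to which peer
dumpNodeConnections(net.hosts)
dumpNodeConnections(net.switches)

# Each link object exposes both endpoint interfaces
for link in net.links:
    print(link.intf1, "<->", link.intf2)

# Node-name pairs straight from the topology object
for src, dst in net.topo.links():
    print(src, dst)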
For flow information you can either run ovs-dpctl, ovs-ofctl etc. outside of mininet (in a normal shell), or run the equivalents without the ovs- prefix inside the mininet CLI.
I'm using the Create Virtual Machine Deployment method of the Azure REST API: http://msdn.microsoft.com/en-us/library/windowsazure/jj157194.aspx
I'm trying to use an image sourced from the VM Depot, with a path such as this:
http://vmdepotwestus.blob.core.windows.net/linux-community-store/community-4-d803ca0a-5d98-4be8-8895-2a9d15ec3974-1.vhd
I am currently getting the following error:
The virtual machine image source is not valid.
I am assuming there is some process that first needs to be completed to make that image available to the specific API user, but I can't work out what it is.
You can't deploy directly from VM Depot. You must first copy the image to your own storage account. There are instructions on the VM Depot help page for doing this via the Azure Management portal (see http://vmdepot.msopentech.com/Help/Help.cshtml#deployingUsingAUX). It can also be done via the CLI tools, see http://www.windowsazure.com/en-us/manage/install-and-configure-cli/#use
It is more complicated than that. You have to copy the VHD from its VM Depot link into your own storage, create an image from it, and then provision the machine.
The VM Depot command-line tool is written in Node, so you can easily read its source to see how it works.
I also do this in IaaS Management Studio, so you could take a look with Reflector at how I did it.
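For reference, the copy step itself can be done with a single server-side Copy Blob request against your own storage account. A minimal sketch with Python's requests; the destination account, container, and SAS token are placeholders you must supply:
import requests

# Source: the public VM Depot blob from the question
src = ("http://vmdepotwestus.blob.core.windows.net/linux-community-store/"
       "community-4-d803ca0a-5d98-4be8-8895-2a9d15ec3974-1.vhd")

# Destination: a blob in YOUR storage account, authorized via a SAS token
dest = "https://MYACCOUNT.blob.core.windows.net/vhds/community.vhd?SAS-TOKEN"

resp = requests.put(dest, headers={
    "x-ms-copy-source": src,       # asks the service to copy server-side
    "x-ms-version": "2012-02-12",  # first API version allowing cross-account copy
})
resp.raise_for_status()  # 202 Accepted means the copy has been scheduled

Once the copy completes, you can register the VHD as an OS image and point the deployment at that image.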