Is there an API for fetching the list of stopped containers?

I need to get the list of all stopped containers via an API, but I have only found commands that do this.
If no such API is available, please suggest how to create one around the docker commands, so that whenever I hit the API I get the list of stopped containers.

First, if you need other machines to reach the Docker daemon, you have to expose it over TCP in /lib/systemd/system/docker.service, like this (remember to run systemctl daemon-reload and restart the Docker daemon after editing):
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2375
Second, you can paste the following URL into a browser to list all containers in the exited state:
http://10.192.244.188:2375/containers/json?filters={"status":["exited"]}
If you use curl, you need to URL-encode the special characters in the filter, like this:
curl http://10.192.244.188:2375/containers/json?filters=%7B%22status%22%3A%5B%22exited%22%5D%7D
You can also pipe the output through a JSON formatter to make it easier to read:
curl http://10.192.244.188:2375/containers/json?filters=%7B%22status%22%3A%5B%22exited%22%5D%7D | python -m json.tool
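If you would rather expose your own endpoint built on top of the docker commands (as the question asks), here is a minimal sketch in Python using Flask; the /stopped route, the port, and the response shape are my own choices for illustration, not an established convention:

# stopped_api.py - minimal sketch of an HTTP endpoint wrapping the docker CLI.
# Assumes docker is installed and the user running this may invoke it.
import subprocess
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/stopped")
def stopped_containers():
    # 'docker ps -a' with a status filter lists exited containers;
    # --format keeps the output machine-readable (one container per line).
    out = subprocess.check_output(
        ["docker", "ps", "-a", "--filter", "status=exited",
         "--format", "{{.ID}} {{.Names}}"],
        text=True)
    containers = [dict(zip(("id", "name"), line.split(maxsplit=1)))
                  for line in out.splitlines()]
    return jsonify(containers)

if __name__ == "__main__":
    app.run(port=5000)

With this running, curl http://localhost:5000/stopped returns the stopped containers as JSON.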

Related

Do we need to install the Universal Forwarder on the host (log-originating server) for scripted inputs?

I need to forward some database-related logs into the Splunk indexer using scripted inputs (shell scripts).
My questions are:
1) Do I need to install the Universal Forwarder on the host side?
2) Is there any other way, rather than installing a UF on the host, to extract the logs into the indexer using scripted inputs?
3) What steps do I need to follow to accomplish this?
1) To run a scripted input you need either a Universal Forwarder or a Heavy Forwarder. You'll need the HF to run a Python script.
2) See #Akah's answer.
3) See http://docs.splunk.com/Documentation/Forwarder/7.2.1/Forwarder/Abouttheuniversalforwarder
You can use the HTTP Event Collector which permits you to send data to the indexer via HTTP in JSON format.
There are examples showing you how to do it via curl (and therefore from a script):
curl -k https://<host>:8088/services/collector -H 'Authorization: Splunk <token>' -d '{"sourcetype": "mysourcetype", "event":"Hello, World!"}'
You can follow the walkthrough too.
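For reference, here is the same HEC call as a small Python sketch using only the standard library; <host> and <token> are placeholders to fill in, exactly as in the curl example:

# hec_send.py - sketch of posting one event to the HTTP Event Collector,
# mirroring the curl call above.
import json
import ssl
import urllib.request

HEC_URL = "https://<host>:8088/services/collector"  # placeholder host
TOKEN = "<token>"                                   # placeholder HEC token

payload = {"sourcetype": "mysourcetype", "event": "Hello, World!"}
req = urllib.request.Request(
    HEC_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Authorization": "Splunk " + TOKEN})

# Equivalent of curl's -k for self-signed certificates; drop this
# context if your indexer has a proper certificate.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with urllib.request.urlopen(req, context=ctx) as resp:
    print(resp.read().decode())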

Need a persistent key-value store which can be accessed via HTTP

I am looking for a persistent key-value DB which can be accessed via HTTP. I need to use it for storing Postman test script data. I have heard of RocksDB and LevelDB, but I am not sure whether they can be accessed via HTTP.
leveldb and rocksdb don't have a network component.
I created a small Python project that exposes a document-datastore-like API which you can query using REST. Have a look at it: https://github.com/amirouche/deuspy. It relies on LevelDB for persistence.
There is a Python asyncio client, and it's easy to create a client of your own.
To get started, you can simply do the following:
pip3 install deuspy
python3 -m deuspy.server
And then start querying.
Here is an example curl-based session:
$ curl -X GET http://localhost:9990
{}
$ curl -X POST --data '{"héllo": "world"}' http://localhost:9990
3252169150753703489
$ curl -X GET http://localhost:9990/3252169150753703489
{"h\u00e9llo": "world"}
You can also filter documents; look at how the asyncio client is implemented.
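For instance, a minimal synchronous client mirroring the curl session above could look like this (a sketch using the requests library and assuming only the endpoints shown in that session):

# deuspy_client.py - tiny client sketch against a local deuspy server.
import requests

BASE = "http://localhost:9990"

def put(doc):
    # POSTing a JSON document returns its generated identifier
    return requests.post(BASE, json=doc).text.strip()

def get(uid):
    # GET /<uid> returns the stored document
    return requests.get(f"{BASE}/{uid}").json()

uid = put({"héllo": "world"})
print(get(uid))  # {'héllo': 'world'}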
Take a look at Webdis, which provides HTTP REST API access to the Redis key-value store. Redis has very good performance and scalability.
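As a quick illustration, Webdis maps URLs of the form /COMMAND/arg1/arg2 to Redis commands and answers in JSON; here is a sketch against a local Webdis on its default port 7379 (the exact response shape may vary with the Webdis version):

# webdis_example.py - sketch of talking to Redis through Webdis over HTTP.
import requests

BASE = "http://127.0.0.1:7379"

print(requests.get(f"{BASE}/SET/greeting/hello").json())  # e.g. {'SET': [True, 'OK']}
print(requests.get(f"{BASE}/GET/greeting").json())        # e.g. {'GET': 'hello'}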

CLI command for the SonarQube upgrade browser step

https://docs.sonarqube.org/display/SONAR/Upgrading
I am just going through this documentation to upgrade SonarQube.
One of the steps is to open a URL in the browser and follow the instructions.
Is there a CLI command available for this step, so that I can automate it in my upgrade automation?
Most (or even all?) UI interactions only trigger Web API calls.
In your case, api/system/migrate_db seems to serve your purpose.
From the API documentation:
Migrate the database to match the current version of SonarQube.
Sending a POST request to this URL starts the DB migration. It is
strongly advised to make a database backup before invoking this WS.
To call it from the command line use:
curl -s -u admin:admin -XPOST "localhost:9000/api/system/migrate_db"
curl is a Linux command-line tool for communicating via HTTP
-s toggles silent mode
-u admin:admin provides authentication
-XPOST sets the HTTP method to POST (instead of the default GET)
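If your automation should also wait for the migration to finish, a sketch like the following (same host and default credentials as above) triggers it and then polls api/system/status until the server reports UP:

# migrate.py - trigger the SonarQube DB migration and wait for completion.
import time
import requests

BASE = "http://localhost:9000"
AUTH = ("admin", "admin")

requests.post(f"{BASE}/api/system/migrate_db", auth=AUTH)

while True:
    status = requests.get(f"{BASE}/api/system/status").json()["status"]
    print("status:", status)
    if status == "UP":   # migration finished, server is ready
        break
    time.sleep(5)        # e.g. DB_MIGRATION_RUNNING -> keep waiting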

Mappings between Docker Remote API and its command line client

Docker documentation is pretty good at describing what you can do from the command line.
It also gives a pretty comprehensive description of the commands associated with the remote API.
It does not, however, appear to give sufficient context for using the remote API to do things that one would do using the command line.
An example of what I am talking about: suppose you want to do a command like:
docker run --rm=true -i -t -v /home/user/resources:/files -p 8080:8080 --name SomeService myImage_v3
using the Remote API. There is a container "run" command in the Remote API:
POST /containers/(id or name)/start
And this command refers back to the create container command for the rather long list of JSON strings that you would need to add in order to do the actual start.
The problem here is: first, just calling this command doesn't work. Apparently there is more that you have to do (I am guessing you have to do a create, then a start). Second, it is unclear which JSON strings you need to use in order to do what I showed in the command line (like setting ports, mapping to the external directory, etc). Not only do the JSON strings provided in the remote API documentation not line up with the command line parameters (at least, not in any way that is obvious!), but it is unclear which JSON strings are required for the create (assuming that we have to do a create, which isn't established yet!) and which are required for the start.
This is just related to starting a container. Suppose you want to stop and destroy a container, as in:
docker stop SomeService
docker rm SomeService
Granted, there appear to be one-to-one commands for doing this in the remote API:
POST /containers/(id or name)/stop
DELETE /containers/(id or name)
But it seems that the IDs you can pass them do not correspond to the IDs shown when you list containers or images.
Is there somewhere I can go to gather information on how to set up and use remote API commands that relates these commands and their JSON parameters to the commands and parameters in the command line?
Failing that, can someone please tell me how to do the start that I showed in my illustration using the remote API?
In any event: is there someone working on docker development I can bring these documentation issues to? It is, I believe, a big "hole" in their documentation.
Someone please advise...
docker run is a combination of docker create, followed by docker start, so https://docs.docker.com/engine/reference/api/docker_remote_api_v1.22/#create-a-container, followed by https://docs.docker.com/engine/reference/api/docker_remote_api_v1.22/#start-a-container
If you're running "interactively", you may need to attach to the container after that; https://docs.docker.com/engine/reference/api/docker_remote_api_v1.22/#attach-to-a-container
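Concretely, the docker run line from the question maps onto a create call followed by a start call roughly as in this sketch (it assumes a daemon listening on tcp://localhost:2375; note that --rm has no server-side field in API v1.22, where the CLI removes the container itself after it exits; only later API versions add HostConfig.AutoRemove):

# run_container.py - 'docker run' decomposed into create + start.
import requests

BASE = "http://localhost:2375"

create_payload = {
    "Image": "myImage_v3",   # positional image argument
    "Tty": True,             # -t
    "OpenStdin": True,       # -i
    "HostConfig": {
        "Binds": ["/home/user/resources:/files"],              # -v
        "PortBindings": {"8080/tcp": [{"HostPort": "8080"}]},  # -p 8080:8080
    },
}

# --name is passed in the query string, not in the JSON body
resp = requests.post(f"{BASE}/containers/create",
                     params={"name": "SomeService"},
                     json=create_payload)
container_id = resp.json()["Id"]

# start needs no body here; all the options live in the create call
requests.post(f"{BASE}/containers/{container_id}/start")
print("started", container_id)

Stopping and removing then map to POST /containers/(id or name)/stop and DELETE /containers/(id or name); both accept the Id returned by create as well as the name you chose.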

How to download all my caldav and carddav data with one wget / curl?

Until now, I used Google Calendar and did my personal backup with a daily wget of the public .ics link.
Now I want to switch to a new service that only offers CalDAV access.
Is there a way to download all my CalDAV and CardDAV data with one wget/curl call?
The downloaded data should allow me to restore data if it is lost.
Thanks in advance.
Edit:
I created a very simple PHP file which works in the way hmh explained. I don't know whether this approach works for other providers, but for mailbox.org it works well.
You can find it here https://gist.github.com/ahemwe/a2eaae4d56ac85969cf2.
Please be more specific, what is the new service/server you are using?
This is not specifically CalDAV, but most DAV servers still provide a way to grab all events/todos using a single GET, usually by targeting the relevant collection, e.g. with either one of these:
curl -X GET -u login -H "Accept: text/calendar" https://myserver/joe/home/
curl -X GET -u login -H "Accept: text/calendar" https://myserver/joe/home.ics
In CalDAV/CardDAV you can grab the whole contents of a collection using a PROPFIND:
curl -X PROPFIND -u login -H "Content-Type: text/xml" -H "Depth: 1" \
--data "<propfind xmlns='DAV:'><prop><calendar-data xmlns='urn:ietf:params:xml:ns:caldav'/></prop></propfind>" \
https://myserver/joe/home/
Replace calendar-data with
<address-data xmlns="urn:ietf:params:xml:ns:carddav"/>
for CardDAV.
This will give you an XML entity which has the iCal/vCard contents embedded. To restore it, you would need to parse the XML and extract the data (not hard).
Note: although this is plain standard, some servers reject it or just omit the content (lame! file bug reports ;-).
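The parsing step really is small; here is a sketch using Python's standard XML module together with the requests library (server URL and credentials are placeholders, as in the curl example above):

# caldav_dump.py - run the PROPFIND above and extract the embedded iCalendar data.
import requests
import xml.etree.ElementTree as ET

URL = "https://myserver/joe/home/"   # placeholder collection URL
BODY = ("<propfind xmlns='DAV:'><prop>"
        "<calendar-data xmlns='urn:ietf:params:xml:ns:caldav'/>"
        "</prop></propfind>")

resp = requests.request("PROPFIND", URL, data=BODY,
                        auth=("login", "password"),
                        headers={"Content-Type": "text/xml", "Depth": "1"})

root = ET.fromstring(resp.content)
# each calendar-data element carries one event/todo as plain iCalendar text
for node in root.iter("{urn:ietf:params:xml:ns:caldav}calendar-data"):
    if node.text:
        print(node.text)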
Specifically for people using Baïkal (>= 0.3.3; other Sabre/dav-based solutions will be similar), you can go directly to
https://<Baïkal location>/html/dav.php/
in a browser and get an HTML interface that allows you to download ICS files, and thus also to find the right links to use with curl/wget.
I tried the accepted answer which did not work for me. With my CalDAV calendar provider I can, however, retrieve all calendar files using
wget -c -r -l 1 -nc --user='[myuser]' --password='[mypassword]' --accept=ics '[url]'
where [myuser] and [mypassword] are what you expect and [url] is the same URL as the one that you enter in your regular CalDAV software (as specified by your provider).
The command creates a directory containing all the ICS files representing the calendar items. A similar command works for my address book.