Need persistent key-value store that can be accessed via HTTP - Redis

I am looking for a persistent key-value DB that can be accessed via HTTP. I need it for storing Postman test script data. I have heard of RocksDB and LevelDB, but I am not sure whether they can be accessed via HTTP.

LevelDB and RocksDB don't have a network component.
I created a small Python project that exposes a document-datastore-like API you can query over REST. Have a look at it: https://github.com/amirouche/deuspy. It relies on LevelDB for persistence.
There is a Python asyncio client, and it's easy to write your own.
To get started, you can simply do the following:
pip3 install deuspy
python3 -m deuspy.server
And then start querying.
Here is an example curl-based session:
$ curl -X GET http://localhost:9990
{}
$ curl -X POST --data '{"héllo": "world"}' http://localhost:9990
3252169150753703489
$ curl -X GET http://localhost:9990/3252169150753703489
{"h\u00e9llo": "world"}
You can also filter documents; see how the asyncio client is implemented for an example.

Take a look at Webdis, which provides HTTP REST API access to the Redis key-value store. Redis has very good performance and scalability.
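For example, Webdis maps Redis commands onto URL path segments and wraps each reply in JSON. A minimal sketch of a session, assuming Webdis is running on its default port 7379 (the key name is just an illustration):
# SET a key, then read it back; each reply comes back as JSON.
curl http://127.0.0.1:7379/SET/postman:data/hello
# {"SET":[true,"OK"]}
curl http://127.0.0.1:7379/GET/postman:data
# {"GET":"hello"}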

Related

Is there an API for fetching the list of stopped containers

I need to get the list of all stopped containers via an API, but I have only found commands for this.
If an API is not available, suggest how we can build one around Docker commands, so that whenever I hit the API, I get the list of stopped containers.
First, if you need other machines to reach the Docker daemon, you have to expose it over TCP in /lib/systemd/system/docker.service, like this:
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2375
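After editing the unit file, reload systemd and restart the daemon so the new listener takes effect:
# Pick up the modified unit file, then restart Docker.
systemctl daemon-reload
systemctl restart docker
Note that this exposes the daemon on port 2375 without authentication, so only do it on a trusted network.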
Second, you can paste the following URL into a browser to get all containers whose status is exited:
http://10.192.244.188:2375/containers/json?filters={"status":["exited"]}
If you use curl, you need to URL-encode the special characters in the filter, like this:
curl http://10.192.244.188:2375/containers/json?filters=%7B%22status%22%3A%5B%22exited%22%5D%7D
You can also pipe the output through a JSON formatter to make it easier to read:
curl http://10.192.244.188:2375/containers/json?filters=%7B%22status%22%3A%5B%22exited%22%5D%7D | python -m json.tool
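If you would rather not hand-encode the filter, curl can do the escaping itself; a sketch using -G together with --data-urlencode:
# -G sends the request as a GET and appends the --data-urlencode value
# to the query string, already percent-encoded.
curl -G --data-urlencode 'filters={"status":["exited"]}' http://10.192.244.188:2375/containers/json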

Do we need to install the Universal Forwarder on the host (log-originating server) for scripted inputs?

I need to forward some database-related logs into a Splunk indexer using scripted inputs (shell scripts).
My questions are:
1) Do I need to install the Universal Forwarder on the host side?
2) Is there any way, other than installing the UF on the host, to get the logs into the indexer using scripted inputs?
3) What steps do I need to follow to accomplish this?
1) To run a scripted input you need either a Universal Forwarder or a Heavy Forwarder. You'll need the HF to run a Python script.
2) See @Akah's answer.
3) See http://docs.splunk.com/Documentation/Forwarder/7.2.1/Forwarder/Abouttheuniversalforwarder
You can use the HTTP Event Collector (HEC), which lets you send data to the indexer over HTTP in JSON format.
There are examples showing how to do this via curl (and therefore from a script):
curl -k https://<host>:8088/services/collector -H 'Authorization: Splunk <token>' -d '{"sourcetype": "mysourcetype", "event":"Hello, World!"}'
You can follow the walkthrough too.
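As a minimal sketch of the scripted side, assuming a HEC token and hostname (both placeholders below), a shell script on the database host could forward log lines like this:
#!/bin/sh
# Tail the database log and post each new line to the HTTP Event Collector.
# HEC_URL and HEC_TOKEN are placeholders; log lines containing quotes
# would need proper JSON escaping before being embedded in the payload.
HEC_URL="https://splunk.example.com:8088/services/collector"
HEC_TOKEN="00000000-0000-0000-0000-000000000000"
tail -F /var/log/db/db.log | while read -r line; do
  curl -k -s "$HEC_URL" \
    -H "Authorization: Splunk $HEC_TOKEN" \
    -d "{\"sourcetype\": \"db:log\", \"event\": \"$line\"}"
done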

CLI command for SonarQube upgrade browser step

https://docs.sonarqube.org/display/SONAR/Upgrading
I am going through this documentation to upgrade SonarQube.
One of the steps is to open a URL in the browser and follow the instructions.
Is there a CLI command available for this step, so that I can automate it in my upgrade scripts?
Most (or even all?) UI interactions only trigger Web API calls.
In your case, api/system/migrate_db seems to serve your purpose.
From the API documentation:
Migrate the database to match the current version of SonarQube.
Sending a POST request to this URL starts the DB migration. It is
strongly advised to make a database backup before invoking this WS.
To call it from the command line use:
curl -s -u admin:admin -XPOST "localhost:9000/api/system/migrate_db"
curl is a command-line tool for communicating via HTTP
-s enables "silent mode"
-u admin:admin provides authentication
-XPOST sets the HTTP method to POST (instead of the default GET)
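If you also want your automation to wait for the migration to finish, the api/system/status endpoint reports the server state; a sketch (the exact status values are an assumption based on the Web API documentation):
# Poll until SonarQube reports it is fully up again after the migration.
until curl -s "localhost:9000/api/system/status" | grep -q '"status":"UP"'; do
  sleep 5
done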

cURL API Commands

I have a question about APIs and cURL. I'm not sure if this is all Python, but I am trying to access JSON data using an API, and the server isn't as easy as grabbing the data with an XMLHttpRequest... The support team gave me this line of code:
curl -k -s --data "api_id=xxxx&api_key=xxxx&time_range=today&site_id=xxxxx" https://my.incapsula.com/api/stats/v1
And I have no idea what this even means, because all the API requests I've been making were just as easy as using a link and parsing the result with some JavaScript. Can anyone break down -k, -s, and --data for me, or point me to the right tutorial?
(NOT PYTHON; sorry guys...)
The right tutorial is the man page.
-k/--insecure
(SSL) This option explicitly allows curl to perform "insecure" SSL connections and transfers. All SSL connections are attempted to be made secure by using the CA certificate bundle installed by default. This makes all connections considered "insecure" fail unless -k/--insecure is used.
See this online resource for further details: http://curl.haxx.se/docs/sslcerts.html
-s/--silent
Silent or quiet mode. Don't show progress meter or error messages. Makes Curl mute.
As for --data, it specifies the data you are sending to the server as the request body (which also makes the request a POST).
This question is (for now) not related to Python at all, but rather to shell scripting.
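Spelled out with the long-form flags (and split across lines for readability), the support team's command is equivalent to:
# Same request as above; the api_id/api_key/site_id values are placeholders.
curl --insecure --silent \
  --data "api_id=xxxx&api_key=xxxx&time_range=today&site_id=xxxxx" \
  https://my.incapsula.com/api/stats/v1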

RavenDB backup using HTTP API and multi-tenancy

The RavenDB documentation shows how to back up a RavenDB database using
curl -X POST http://localhost:8080/admin/backup -d "{ 'BackupLocation': 'C:\Backups\2010-05-06' }"
But how can I back up a specific database using the HTTP API?
Tenants just become a sub-URI, so it would be http://localhost:8080/[TENANT]/admin/backup
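For example, for a tenant database named Northwind (a hypothetical name), the backup call would look like:
# Back up the 'Northwind' tenant database; the name is a placeholder.
curl -X POST http://localhost:8080/Northwind/admin/backup -d "{ 'BackupLocation': 'C:\Backups\2010-05-06' }"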
Personally, I would use Smuggler or rely on shadow copies.