The RavenDB documentation shows how to back up a RavenDB database using:
curl -X POST http://localhost:8080/admin/backup -d "{ 'BackupLocation': 'C:\Backups\2010-05-06' }"
But how can I backup a specific database using the HTTP API?
Tenants just become a sub-URI, so it would be http://localhost:8080/[TENANT]/admin/backup
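For example, assuming a tenant database named MyDatabase (a placeholder name), the same backup request becomes:
curl -X POST http://localhost:8080/MyDatabase/admin/backup -d "{ 'BackupLocation': 'C:\Backups\2010-05-06' }"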
Personally, I would use Smuggler or rely on shadow copies.
I need to forward some database-related logs into a Splunk indexer using scripted inputs (shell scripts).
My questions are:
1) Do I need to install the Universal Forwarder on the host side?
2) Is there any other way, rather than installing the UF on the host, to get the logs into the indexer using scripted inputs?
3) What steps do I need to follow to accomplish this?
1) To run a scripted input you need either a Universal Forwarder or a Heavy Forwarder. You'll need the HF to run a Python script.
2) See @Akah's answer.
3) See http://docs.splunk.com/Documentation/Forwarder/7.2.1/Forwarder/Abouttheuniversalforwarder
You can use the HTTP Event Collector (HEC), which lets you send data to the indexer over HTTP in JSON format.
There are examples that show you how to do it via curl (and therefore from a script):
curl -k https://<host>:8088/services/collector -H 'Authorization: Splunk <token>' -d '{"sourcetype": "mysourcetype", "event":"Hello, World!"}'
You can follow the walkthrough too.
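As a minimal sketch of the scripted-input idea, assuming HEC is enabled on port 8088; the host, token, and log path below are placeholders:
#!/bin/sh
# Placeholder host, token, and log path; adjust for your environment.
# Note: the quoting here is naive; log lines containing quotes or
# backslashes would need proper JSON escaping.
tail -n 100 /var/log/mydb/mydb.log | while read -r line; do
  curl -k -s https://splunk-host:8088/services/collector \
    -H "Authorization: Splunk <token>" \
    -d "{\"sourcetype\": \"mydb:log\", \"event\": \"$line\"}"
done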
I am looking for a persistent key-value DB which can be accessed via HTTP. I need to use it for storing Postman test script data. I have heard of rocksdb and leveldb, but I am not sure whether they can be accessed via HTTP.
leveldb and rocksdb don't have a network component.
I created a small Python project that exposes a document-datastore-like API you can query over REST. Have a look at it: https://github.com/amirouche/deuspy. It relies on leveldb for persistence.
There is a Python asyncio client, and it's very easy to create a client of your own.
To get started, you can simply do the following:
pip3 install deuspy
python3 -m deuspy.server
And then start querying.
Here is an example curl-based session:
$ curl -X GET http://localhost:9990
{}
$ curl -X POST --data '{"héllo": "world"}' http://localhost:9990
3252169150753703489
$ curl -X GET http://localhost:9990/3252169150753703489
{"h\u00e9llo": "world"}
You can also filter documents; look at how the asyncio client is implemented.
Take a look at Webdis, which provides HTTP REST API access to the Redis key-value store. Redis has very good performance and scalability.
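A minimal sketch, assuming Webdis is running on its default port 7379; Redis commands map directly onto URL path segments:
curl http://127.0.0.1:7379/SET/hello/world
# {"SET":[true,"OK"]}
curl http://127.0.0.1:7379/GET/hello
# {"GET":"world"}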
https://docs.sonarqube.org/display/SONAR/Upgrading
I am just going through this documentation to upgrade SonarQube.
One of the steps is to open a URL in the browser and follow the instructions.
Is there a CLI command available for this step, so that I can automate it in my upgrade automation?
Most (or even all?) UI interactions only trigger Web API calls.
In your case, api/system/migrate_db seems to serve your purpose.
From the API documentation:
Migrate the database to match the current version of SonarQube.
Sending a POST request to this URL starts the DB migration. It is
strongly advised to make a database backup before invoking this WS.
To call it from the command line use:
curl -s -u admin:admin -XPOST "localhost:9000/api/system/migrate_db"
curl is a command-line tool for communicating via HTTP
-s enables "silent mode"
-u admin:admin provides authentication
-XPOST sets the HTTP method to POST (instead of the default GET)
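If you want your automation to wait for the migration to finish, here is a sketch that polls the api/system/status endpoint (the "UP" status value is taken from the SonarQube Web API; verify it against your version):
until curl -s localhost:9000/api/system/status | grep -q '"status":"UP"'; do
  sleep 5
done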
I have a question about APIs and cURL. I'm not sure if this is all Python, but I am trying to access JSON data using an API, and the server isn't as easy as grabbing the data with an XMLHttpRequest... The support team gave me this line of code:
curl -k -s --data "api_id=xxxx&api_key=xxxx&time_range=today&site_id=xxxxx" https://my.incapsula.com/api/stats/v1
And I have no idea what this even means, because all the API requests I've been making were just as easy as using a link and parsing through it with some JavaScript. Can anyone break down the -k -s --data for me or point me to the right tutorial?
(NOT PYTHON; Sorry guys...)
The right tutorial is the man page.
-k/--insecure
(SSL) This option explicitly allows curl to perform "insecure" SSL connections and transfers. All SSL connections are attempted to be made secure by using the CA certificate bundle installed by default. This makes all connections considered "insecure" fail unless -k/--insecure is used.
See this online resource for further details: http://curl.haxx.se/docs/sslcerts.html
-s/--silent
Silent or quiet mode. Don't show progress meter or error messages. Makes Curl mute.
As for --data, it specifies the data you are sending to the server (as an HTTP POST body).
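Note that curl joins multiple -d options with &, so the same request could be written more readably as:
curl -k -s \
  -d "api_id=xxxx" \
  -d "api_key=xxxx" \
  -d "time_range=today" \
  -d "site_id=xxxxx" \
  https://my.incapsula.com/api/stats/v1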
This question is (for now) not related to Python at all, but possibly to shell scripting.
Until now, I used Google Calendar and did my personal backup with a daily wget of the public ".ics" link.
Now I want to switch to a new service that only offers CalDAV access.
Is there a way to download all my CalDAV and CardDAV data with one wget/curl?
The downloaded data should make it possible to restore lost data.
Thanks in advance.
Edit: I created a very simple PHP file which works in the way hmh explained. I don't know if this approach works for other providers, but for mailbox.org it works well.
You can find it here: https://gist.github.com/ahemwe/a2eaae4d56ac85969cf2.
Please be more specific: what is the new service/server you are using?
This is not specifically CalDAV, but most DAV servers still provide a way to grab all events/todos using a single GET, usually by targeting the relevant collection, e.g. like either of these:
curl -X GET -u login -H "Accept: text/calendar" https://myserver/joe/home/
curl -X GET -u login -H "Accept: text/calendar" https://myserver/joe/home.ics
In CalDAV/CardDAV you can grab the whole contents of a collection using a PROPFIND:
curl -X PROPFIND -u login -H "Content-Type: text/xml" -H "Depth: 1" \
--data "<propfind xmlns='DAV:'><prop><calendar-data xmlns='urn:ietf:params:xml:ns:caldav'/></prop></propfind>" \
https://myserver/joe/home/
Replace calendar-data with
<address-data xmlns="urn:ietf:params:xml:ns:carddav"/>
for CardDAV, as in the assembled command below.
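A sketch of the full CardDAV variant (the addressbook path is a placeholder, like the calendar path above):
curl -X PROPFIND -u login -H "Content-Type: text/xml" -H "Depth: 1" \
--data "<propfind xmlns='DAV:'><prop><address-data xmlns='urn:ietf:params:xml:ns:carddav'/></prop></propfind>" \
https://myserver/joe/abook/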
This will give you an XML entity which has the iCal/vCard contents embedded. To restore it, you would need to parse the XML and extract the data (not hard; see the sketch after the note below).
Note: although this is plain standard, some servers reject it or just omit the content (lame! file bug reports ;-).
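To extract the embedded iCal data from the command line, here is a sketch using xmllint (the local-name() XPath trick sidesteps XML namespace handling; some servers entity-escape the contents, so inspect the output):
curl -s -X PROPFIND -u login -H "Content-Type: text/xml" -H "Depth: 1" \
--data "<propfind xmlns='DAV:'><prop><calendar-data xmlns='urn:ietf:params:xml:ns:caldav'/></prop></propfind>" \
https://myserver/joe/home/ \
| xmllint --xpath "//*[local-name()='calendar-data']/text()" - > backup.ics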
Specifically for people using Baïkal (>= 0.3.3; other Sabre/dav-based solutions will be similar), you can go directly to
https://<Baïkal location>/html/dav.php/
in a browser and get an HTML interface that allows you to download .ics files, and so also lets you find the right links to use with curl/wget.
I tried the accepted answer, which did not work for me. With my CalDAV calendar provider I can, however, retrieve all calendar files using
wget -c -r -l 1 -nc --user='[myuser]' --password='[mypassword]' --accept=ics '[url]'
where [myuser] and [mypassword] are what you expect and [url] is the same URL as the one that you enter in your regular CalDAV software (as specified by your provider).
The command creates a directory containing all the .ics files representing the calendar items. A similar command works for my address book.