How to download all my CalDAV and CardDAV data with one wget / curl? - backup

Until now, I used Google Calendar and did my personal backup with a daily wget of the public ".ics" link.
Now I want to switch to a new service that only offers CalDAV access.
Is there a way to download all my CalDAV and CardDAV data with one wget / curl call?
The downloaded data should allow me to restore lost data.
Thanks in advance.
Edit:
I created a very simple PHP file that works the way hmh explained. I don't know whether this approach works with other providers, but for mailbox.org it works well.
You can find it here: https://gist.github.com/ahemwe/a2eaae4d56ac85969cf2

Please be more specific: what is the new service/server you are using?
This is not specific to CalDAV, but most DAV servers still provide a way to grab all events/todos with a single GET, usually by targeting the relevant collection, e.g. like either one of these:
curl -X GET -u login -H "Accept: text/calendar" https://myserver/joe/home/
curl -X GET -u login -H "Accept: text/calendar" https://myserver/joe/home.ics
In CalDAV/CardDAV you can grab the whole contents of a collection using a PROPFIND:
curl -X PROPFIND -u login -H "Content-Type: text/xml" -H "Depth: 1" \
--data "<propfind xmlns='DAV:'><prop><calendar-data xmlns='urn:ietf:params:xml:ns:caldav'/></prop></propfind>" \
https://myserver/joe/home/
Replace calendar-data with
<address-data xmlns="urn:ietf:params:xml:ns:carddav"/>
for CardDAV.
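For example, the full CardDAV request might look like this (the addressbook path is a guess; use whatever collection URL your provider gives you):
curl -X PROPFIND -u login -H "Content-Type: text/xml" -H "Depth: 1" \
--data "<propfind xmlns='DAV:'><prop><address-data xmlns='urn:ietf:params:xml:ns:carddav'/></prop></propfind>" \
https://myserver/joe/contacts/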
This will give you an XML document with the iCal/vCard contents embedded. To restore from it, you would need to parse the XML and extract the data (not hard).
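For instance, here is a minimal sketch of extracting the embedded data with xmllint (assuming xmllint is installed; the URL is the same placeholder as above):
# Fetch the PROPFIND response and print the text of every calendar-data element.
# Note: the payload comes back XML-escaped, so you may still need to unescape entities.
curl -s -X PROPFIND -u login -H "Content-Type: text/xml" -H "Depth: 1" \
--data "<propfind xmlns='DAV:'><prop><calendar-data xmlns='urn:ietf:params:xml:ns:caldav'/></prop></propfind>" \
https://myserver/joe/home/ \
| xmllint --xpath "//*[local-name()='calendar-data']/text()" -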
Note: although this is plain standard DAV, some servers reject the request or just omit the content (lame! file bug reports ;-).

Specifically for people using Baïkal (>= 0.3.3; other sabre/dav-based solutions will be similar), you can go directly to
https://<Baïkal location>/html/dav.php/
in a browser and get an HTML interface that lets you download .ics files, and thus also find the right links to use with curl/wget.

I tried the accepted answer, but it did not work for me. With my CalDAV calendar provider I can, however, retrieve all calendar files using
wget -c -r -l 1 -nc --user='[myuser]' --password='[mypassword]' --accept=ics '[url]'
where [myuser] and [mypassword] are what you expect and [url] is the same URL you enter in your regular CalDAV software (as specified by your provider).
The command creates a directory containing all the .ics files representing the calendar items. A similar command works for my address book.
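For reference, this is what the options do (the command itself is unchanged):
# -c            resume partially downloaded files
# -r -l 1       recurse, but only one level below the collection URL
# -nc           do not overwrite files that already exist locally
# --accept=ics  keep only files with the .ics extension
wget -c -r -l 1 -nc --user='[myuser]' --password='[mypassword]' --accept=ics '[url]'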

Related

Downloading PDF report from kibana/elasticsearch using API call

I am trying to generate PDF reports and download them using a script. I followed the instructions below:
https://github.com/elastic/kibana/blob/master/docs/user/reporting/automating-report-generation.asciidoc
I am able to queue the report and I also got a download URL ()/api/.../download/xyzdrfd, but when I try wget on that URL it does not work. I have no idea how to download the report through the API, so I just tried wget.
Can anyone tell me how to download the reports from an API call?
The download might not be happening because of redirects on the page. Use the -L option with the curl command to get it working; I did this specifically against the Kibana endpoint to download a PDF file. Replace the username and password with your own basic-auth credentials, and use the -o option to name the downloaded file. Below is the complete command:
curl -L -u username:password -o download.pdf https://endpoint.com:9244/s/bi-/api/reporting/jobs/download/ktl8n95q001edfc210feaz0r
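If you would rather stay with wget (which follows redirects by default), an equivalent sketch would be (same placeholder credentials and URL as above):
# --auth-no-challenge sends the basic-auth credentials preemptively,
# -O names the output file
wget --auth-no-challenge --user=username --password=password \
-O download.pdf https://endpoint.com:9244/s/bi-/api/reporting/jobs/download/ktl8n95q001edfc210feaz0r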

Is there an API for fetching the list of stopped containers?

I need to get the list of all stopped containers via an API, but I have only found CLI commands for this.
If no API is available, please suggest how to build one around the docker commands, so that whenever I hit the API I get the list of stopped containers.
First, if other machines need to reach the Docker daemon, you have to expose it in /lib/systemd/system/docker.service, like this:
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2375
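After editing the unit file, reload systemd and restart the daemon so the change takes effect:
sudo systemctl daemon-reload
sudo systemctl restart docker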
Second, you can paste the following URL into a browser to list all containers whose status is exited:
http://10.192.244.188:2375/containers/json?filters={"status":["exited"]}
If you use curl, you need to URL-encode the special characters in the filters parameter:
curl http://10.192.244.188:2375/containers/json?filters=%7B%22status%22%3A%5B%22exited%22%5D%7D
You can also pipe the output through a pretty-printer to make it easier to read:
curl http://10.192.244.188:2375/containers/json?filters=%7B%22status%22%3A%5B%22exited%22%5D%7D | python -m json.tool
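Alternatively, you can let curl do the encoding for you with -G/--data-urlencode (a sketch using the same host and filter as above):
# -G appends the url-encoded data to the URL as a query string
curl -G http://10.192.244.188:2375/containers/json \
--data-urlencode 'filters={"status":["exited"]}' | python -m json.tool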

Do we need to install the Universal Forwarder on the host (the log-originating server) for scripted inputs?

I need to forward some database-related logs to a Splunk indexer using scripted inputs (shell scripts).
My questions are:
1) Do I need to install the Universal Forwarder on the host side?
2) Is there any way other than installing a UF on the host to get the logs into the indexer using scripted inputs?
3) What steps do I need to follow to accomplish this?
1) To run a scripted input you need either a Universal Forwarder or a Heavy Forwarder. You'll need the HF to run a Python script.
2) See @Akah's answer.
3) See http://docs.splunk.com/Documentation/Forwarder/7.2.1/Forwarder/Abouttheuniversalforwarder
You can use the HTTP Event Collector, which lets you send data to the indexer over HTTP in JSON format.
There are examples that show how to do this via curl (and therefore from a script):
curl -k https://<host>:8088/services/collector -H 'Authorization: Splunk <token>' -d '{"sourcetype": "mysourcetype", "event":"Hello, World!"}'
You can follow the walkthrough too.
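Putting it together, a minimal sketch of a shell script that pushes a database log line to HEC (host, token, file path, and sourcetype are all placeholders):
#!/bin/sh
# Hypothetical example: send the latest database log line to the HTTP Event Collector.
# Assumes the event text contains no characters that need JSON escaping.
EVENT=$(tail -n 1 /var/log/db/db.log)
curl -k "https://splunk.example.com:8088/services/collector" \
-H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
-d "{\"sourcetype\": \"db:logs\", \"event\": \"$EVENT\"}"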

Artifactory REST API to modify Build Info json

I need to change the build user name (principal) in the build-info JSON while publishing a package to Artifactory.
Kindly let me know if there is a REST API (PUT/POST) available to update the user details in the build info.
Thanks,
Builds are supposed to be immutable, so there is no way to modify one. If you really need to do this, the closest you can get is deleting and re-deploying the existing build info:
curl -uuser:pass -XGET 'http://localhost:8081/artifactory/api/build/foobar/10' >build.json
curl -uuser:pass -XDELETE 'http://localhost:8081/artifactory/api/build/foobar?buildNumbers=10'
curl -uuser:pass -XPUT 'http://localhost:8081/artifactory/api/build' -H 'Content-Type: application/json' -T build.json
This should re-deploy the build exactly as it already is, except that Artifactory will overwrite the principal field with the current user (so make sure you run these as the user you want the principal to be set to). By default, the DELETE will only delete the build info, and not the build artifacts.
If you're looking to deploy a build with a different principal from the user you're deploying as, I don't think that's possible.
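To check which principal is recorded before and after, you can inspect the downloaded JSON (a sketch assuming jq is installed and that the field sits at .buildInfo.principal in the payload returned by the GET above):
jq '.buildInfo.principal' build.json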

Can't upload file to a server using curl

I'm trying to upload a file to a server using curl. It should be uploaded as a binary file regardless of the format it is in, but I'm getting an error:
curl -d @/home/alex/123.log localhost:9000/myupload/
The error (the warning, actually) is
Warning: Couldn't read data from file "123.log", this makes an empty POST.
P.S. Shouldn't I use --data-binary instead of -d? I didn't find any documentation for --data-binary.
Copied from curl's documentation:
-d, --data is the same as --data-ascii. To post data purely binary, you should instead use the --data-binary option. To URL-encode the value of a form field you may use --data-urlencode.
More about the --data-binary parameter.
Note: make sure you have permission to access the file that you want to upload to the server.
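A sketch of the corrected command, using the same file and endpoint as in the question:
# --data-binary posts the file exactly as-is, preserving newlines and binary bytes
curl --data-binary @/home/alex/123.log localhost:9000/myupload/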