Downloading a PDF report from Kibana/Elasticsearch using an API call

I am trying to generate PDF reports and download them using a script. I followed the instructions below:
https://github.com/elastic/kibana/blob/master/docs/user/reporting/automating-report-generation.asciidoc
I am able to queue the report and I also got a download URL ()/api/.../download/xyzdrfd, but when I try wget on that URL it doesn't work. I have no idea how to download the report via the API, so I just tried wget.
Can anyone tell me how to download the reports via an API call?

The download might not be happening because of redirects on the page. Use the -L option with curl to get it working. I did this specifically against the Kibana endpoint to download a PDF file. Replace the username and password with your own basic auth credentials, and use the -o option to specify the name of the downloaded file. Below is a complete example of the command:
curl -L -u username:password -o download.pdf https://endpoint.com:9244/s/bi-/api/reporting/jobs/download/ktl8n95q001edfc210feaz0r
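If the report job has not finished yet, the download endpoint may not return the PDF straight away, so a small retry loop around the same command can help. This is only a sketch, assuming the endpoint returns an HTTP error while the job is still pending (which -f turns into a non-zero exit code); host, credentials, and job ID are placeholders:
# Retry the download until the report is ready (placeholder host, credentials and job ID)
until curl -s -f -L -u username:password -o download.pdf \
  "https://endpoint.com:9244/s/bi-/api/reporting/jobs/download/ktl8n95q001edfc210feaz0r"; do
  echo "Report not ready yet, retrying in 10 seconds..."
  sleep 10
done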

Related

Do we need to install the Universal Forwarder on the host (the log-originating server) for scripted inputs?

I need to forward some database-related logs to the Splunk indexer using scripted inputs (shell scripts).
My questions are:
1) Do I need to install the Universal Forwarder on the host side?
2) Is there any way, other than installing a UF on the host, to get the logs into the indexer using scripted inputs?
3) What steps do I need to follow to accomplish this?
1) To run a scripted input you need either a Universal Forwarder or a Heavy Forwarder. You'll need the HF if the script is written in Python.
2) See #Akah's answer.
3) See http://docs.splunk.com/Documentation/Forwarder/7.2.1/Forwarder/Abouttheuniversalforwarder
You can use the HTTP Event Collector (HEC), which lets you send data to the indexer over HTTP in JSON format.
There are examples showing how to do this via curl (and therefore from a script):
curl -k https://<host>:8088/services/collector -H 'Authorization: Splunk <token>' -d '{"sourcetype": "mysourcetype", "event":"Hello, World!"}'
You can follow the walkthrough too.
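As a rough sketch of how a scripted input could push database logs to HEC without a forwarder, you could tail the log file and post each line with curl. The host, token, and log path below are placeholders, and real log lines may need JSON escaping:
#!/bin/sh
# Placeholder host, token and log path -- adjust for your environment
HEC_URL="https://splunk.example.com:8088/services/collector"
TOKEN="00000000-0000-0000-0000-000000000000"
tail -F /var/log/mydb/mydb.log | while read -r line; do
  # Note: lines containing double quotes would need escaping before being embedded in JSON
  curl -k -s "$HEC_URL" \
    -H "Authorization: Splunk $TOKEN" \
    -d "{\"sourcetype\": \"mysourcetype\", \"event\": \"$line\"}"
done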

CLI command for Sonarqube Upgrade browser step

https://docs.sonarqube.org/display/SONAR/Upgrading
I am just going through this documentation to upgrade SonarQube.
One of the steps is to open a URL in the browser and follow the instructions.
Is there a CLI command available for this step, so that I can automate it in my upgrade automation?
Most (or even all?) UI interactions only trigger Web API calls.
In your case, api/system/migrate_db seems to serve your purpose.
From the API documentation:
Migrate the database to match the current version of SonarQube.
Sending a POST request to this URL starts the DB migration. It is strongly advised to make a database backup before invoking this WS.
To call it from the command line use:
curl -s -u admin:admin -XPOST "localhost:9000/api/system/migrate_db"
curl is a Linux command-line tool for communicating via HTTP
-s enables silent mode
-u admin:admin provides the authentication
-XPOST sets the HTTP method to POST (instead of the default GET)
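If your upgrade automation should wait until the migration has finished, one option is to poll the system status endpoint after starting the migration. This is only a sketch; the exact status values returned by api/system/status may differ between SonarQube versions:
# Start the migration, then poll until the server reports it is back up
curl -s -u admin:admin -XPOST "localhost:9000/api/system/migrate_db"
until curl -s "localhost:9000/api/system/status" | grep -q '"status":"UP"'; do
  echo "Migration still running..."
  sleep 10
done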

Pentaho | How to make a cURL request in Pentaho

I have a URL. I want to log in to that URL and download the XML file behind it; the file should be downloaded automatically. In PHP, we have a cURL call which sends a request to a URL and downloads the data.
Do we have anything like this in Pentaho?
I have gone through the "HTTP Client" step, but I am not getting how to use it to download a particular file from a URL.
Can someone please guide on this? Thanks in advance!
You can use the Shell step in jobs
and put the following into that step.
For Linux:
cd 'location where you want the file'
curl -O 'url'
For Windows:
cd "location where you want the file"
curl -O "url"
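Note that without -O (or -o filename) curl only prints the response to stdout. Since the question also mentions logging in, here is a hedged sketch of what the Shell step script could contain for a URL behind basic auth; the credentials, target directory, and file name are placeholders:
cd /path/where/you/want/the/file
curl -u myuser:mypassword -o data.xml 'https://example.com/export/data.xml'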

Can I use wget to click/execute a URL link, specifically a URL that includes a CGI command?

I need to activate a CGI command on a camera without using a browser. I'm not actually interested in downloading anything; the command just needs to be activated, the same as when I enter it into a browser.
Would wget do the trick, or would it not actually execute the command and instead try to download something? I've only ever used wget to download files.
You can do it by using the --spider option of wget.
For example:
wget --spider http://example.com
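For example, with a hypothetical camera CGI URL (quote it so the shell does not interpret the & in the query string):
wget --spider "http://192.168.1.10/cgi-bin/ptz.cgi?move=up&speed=1"
If the camera only reacts to a full GET rather than the HEAD request that --spider typically sends, an alternative is to fetch the response and discard it:
wget -q -O /dev/null "http://192.168.1.10/cgi-bin/ptz.cgi?move=up&speed=1"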

How to download all my CalDAV and CardDAV data with one wget / curl?

Until now, I used Google Calendar and did my personal backup with a daily wget of the public ".ics" link.
Now I want to switch to a new service that only offers CalDAV access.
Is there a way to download all my CalDAV and CardDAV data with one wget / curl?
The downloaded data should make it possible to restore lost data.
Thanks in advance.
Edit:
I created a very simple PHP file which works in the way hmh explained. I don't know if this approach works for other providers, but it works well for mailbox.org.
You can find it here https://gist.github.com/ahemwe/a2eaae4d56ac85969cf2.
Please be more specific: what is the new service/server you are using?
This is not specifically CalDAV, but most DAV servers still provide a way to grab all events/todos with a single GET, usually by targeting the relevant collection, e.g. either of these:
curl -X GET -u login -H "Accept: text/calendar" https://myserver/joe/home/
curl -X GET -u login -H "Accept: text/calendar" https://myserver/joe/home.ics
In CalDAV/CardDAV you can grab the whole contents of a collection using a PROPFIND:
curl -X PROPFIND -u login -H "Content-Type: text/xml" -H "Depth: 1" \
--data "<propfind xmlns='DAV:'><prop><calendar-data xmlns='urn:ietf:params:xml:ns:caldav'/></prop></propfind>" \
https://myserver/joe/home/
Replace calendar-data with
<address-data xmlns="urn:ietf:params:xml:ns:carddav"/>
for CardDAV.
This will give you an XML entity which has the iCal/vCard contents embedded. To restore it, you would need to parse the XML and extract the data (not hard).
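As a sketch of that extraction step, assuming libxml2's xmllint is available and the PROPFIND response was saved to a file (e.g. by adding -o response.xml to the curl call above), the embedded calendar data can be pulled out like this:
xmllint --xpath "//*[local-name()='calendar-data']/text()" response.xml > backup.ics
Depending on the server, you may still need to unescape XML entities in the output before re-importing it.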
Note: although this is plain standard, some servers reject that or just omit the content (lame! file bug reports ;-).
Specifically for people using Baïkal (>= 0.3.3; other Sabre/dav-based solutions will be similar), you can go directly to
https://<Baïkal location>/html/dav.php/
in a browser and get an HTML interface that lets you download ICS files, and so also lets you find the right links to use with curl/wget.
I tried the accepted answer which did not work for me. With my CalDAV calendar provider I can, however, retrieve all calendar files using
wget -c -r -l 1 -nc --user='[myuser]' --password='[mypassword]' --accept=ics '[url]'
where [myuser] and [mypassword] are what you expect and [url] is the same URL as the one that you enter in your regular CalDAV software (as specified by your provider).
The command creates a directory containing all the ICS files representing the calendar items. A similar command works for my address book.