Tableau has some REST API calls
Question 1: Does anyone know how to use the following call to download an online workbook? Sample code will be appreciated.
Question 2: Does anyone know how to read and parse this .twbx file?
Thanks.
You will need to send a GET request to a URL such as
https://YOURDOMAIN/api/2.0/sites/SITE-LUID/workbooks/WORKBOOK-LUID/content
You also need to send your authorisation token in a header named X-Tableau-Auth.
I suggest getting Chrome and Postman installed to test this kind of stuff out. Importing https://github.com/TableauExamples/Tableau_Postman as a collection will help.
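Since sample code was asked for, here is a minimal sketch of the call above in Python (using the requests library), assuming REST API 2.x; YOURDOMAIN, SITE-LUID, WORKBOOK-LUID, USERNAME and PASSWORD are all placeholders:

import requests
import xml.etree.ElementTree as ET

BASE = "https://YOURDOMAIN/api/2.0"

# 1. Sign in to obtain the token that goes into the X-Tableau-Auth header.
signin_body = (
    '<tsRequest>'
    '<credentials name="USERNAME" password="PASSWORD">'
    '<site contentUrl="" />'
    '</credentials>'
    '</tsRequest>'
)
resp = requests.post(f"{BASE}/auth/signin", data=signin_body,
                     headers={"Content-Type": "application/xml"})
resp.raise_for_status()
root = ET.fromstring(resp.content)
# The token is an attribute of the <credentials> element in the response.
token = next(el for el in root.iter() if el.tag.endswith("credentials")).get("token")

# 2. GET the workbook content; the response body is the .twbx file itself.
wb = requests.get(
    f"{BASE}/sites/SITE-LUID/workbooks/WORKBOOK-LUID/content",
    headers={"X-Tableau-Auth": token},
)
wb.raise_for_status()
with open("workbook.twbx", "wb") as f:
    f.write(wb.content)

For question 2: a .twbx is just a ZIP archive, so you can open the downloaded file with Python's zipfile module and read the .twb workbook (plain XML) inside it.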
I need to create an automated test setup for some web services, and plan to use SoapUI or Postman for that. My question is pretty basic: what happens to the data after a request is made?
E.g. if the response contains data from a system and Postman displays it in its UI, will Postman store the response? Or what happens to it after the request?
I'm asking for security purposes and I was not able to find a concrete answer myself. Thank you in advance.
Postman gives you explicit control over whether data is stored or not. When you run a collection, you can specify in the settings whether to store responses, cookies, etc. Configure it as per your needs.
As per the official site:
"Postman does not track any content of your requests/responses."
Under File → Settings.
You can even avoid using the cloud version if you don't want to sync things up.
Re SoapUI...
If you call a service once, the data remains in the UI. If you run it a second or third time, only the last response is shown in the UI.
Once you close SoapUI, the request and response data is gone.
However, you can save the data from every request and response by using a DataSink step, should that be what you want.
What would be the simplest tool/editor (ideally for Mac) to run web API queries (stateless RESTful web API) in a loop in order to store the JSON results in a file?
It's very simple basically; I'm just trying to automate the following:
- a first call to get a list of IDs
- then, for each ID, a call to get a few values related to this ID. The values are returned as JSON; I would like to store them in a file (CSV or Excel).
To test the queries, I've used "Advanced REST Client" to set up a request with my authentication information in the header and run a few test API queries. It works well, but now I basically want to create a script that gets the whole set of returned data and saves it to a file, with the idea of running this script from time to time. You can't do that with "Advanced REST Client", right?
Sorry it's not (yet!) a super advanced question but any help would be greatly appreciated.
You may try Postman - it definitely works on (accursed) Mac.
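If you end up scripting the loop instead, a rough Python sketch of what you describe could look like the following; the base URL, endpoint paths, header and JSON field names are made up and need to be adapted to your actual API:

import csv
import requests

BASE = "https://api.example.com"                  # placeholder API
HEADERS = {"Authorization": "Bearer YOUR_TOKEN"}  # placeholder auth header

# 1. First call: get the list of IDs (the "ids" field name is invented).
ids = requests.get(f"{BASE}/items", headers=HEADERS).json()["ids"]

# 2. For each ID, fetch its values and keep the fields we care about.
rows = []
for item_id in ids:
    data = requests.get(f"{BASE}/items/{item_id}", headers=HEADERS).json()
    rows.append({"id": item_id,
                 "name": data.get("name"),
                 "value": data.get("value")})

# 3. Save everything to a CSV file.
with open("results.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "name", "value"])
    writer.writeheader()
    writer.writerows(rows)

Run it from cron (or launchd on a Mac) if you want it executed from time to time.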
There is a bit of confusion; I'm wondering if somebody can help me with this.
Here is an example which is a year old and uses goapp with Polymer and endpoints
https://github.com/googlesamples/cloud-polymer-go
Here is a recent example using gcloud
https://github.com/GoogleCloudPlatform/golang-samples/tree/master/appengine_flexible/endpoints
The two are different because Google changed its approach.
As per the second example, I am able to create endpoint functions which use JSON for input and output. However, there are two problems.
1st. It throws an error if I put the functions in a different file under the same package. It works when I run go run *.go, but then I don't understand how app.yaml comes into the picture. I think the URL /_ah/spi/.* should work.
2nd. I am using the Postman app to check the request and response from the endpoint. Is there a better way? I thought Google provided a platform to test endpoints.
Is there any example similar to the first one that is implemented with the new libraries?
Looking forward to your help. Thanks.
I have a Google spreadsheet published to the web, and I'm trying to read and write list-based feeds. I have not authenticated.
I can read data using a GET request just fine.
curl https://spreadsheets.google.com/feeds/list/1WngGKZHauwBqRxgC5E6DFMiJ-J8BDsadfwerF4RE0M/1/public/full
However, all my POST requests to write data come back with an error.
curl --data '' https://spreadsheets.google.com/feeds/list/1WngGKZHauwBqRxgC5E6DFMiJ-J8BDlN3mMaBfF4RE0M/1/public/full
curl --data '<entry xmlns="http://www.w3.org/2005/Atom" xmlns:gsx="http://schemas.google.com/spreadsheets/2006/extended"><gsx:id>1</gsx:id><gsx:status>1</gsx:status><gsx:user_email>1</gsx:user_email><gsx:url domain>1</gsx:url domain><gsx:highlighted_text>1</gsx:highlighted_text><gsx:complete_text comments>1</gsx:complete_text comments><gsx:created_date>1</gsx:created_date><gsx:updated_date>1</gsx:updated_date><gsx:created_by>1</gsx:created_by><gsx:updated_by>1</gsx:updated_by></entry>' https://spreadsheets.google.com/feeds/list/1WngGKZHauwBqRxgC5E6DFMiJ-J8BDlN3mMaBfF4RE0M/1/public/full
The error I'm getting is: "Sorry, the file you have requested does not exist. Please check the address and try again."
Do POST requests require authentication? I can't find anything in the docs that says so.
Any help would be appreciated. Thanks.
Two things:
I think you need to use the private feed (.../private/full), not the public one (.../public/full) like you're using now. More on the private-versus-public distinction is in the API docs.
You will need an Authorization header. (That process gets pretty complicated, but there's a lot of good info here. Remember to select the appropriate scopes.)
Both are mentioned in the example about adding a list row in the Sheets API docs. (I'd link directly, but I don't have enough rep to add more than two links. Just search for "Adding a list row".)
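Putting those two points together, a hedged Python sketch of the write request might look like this; SPREADSHEET_KEY and ACCESS_TOKEN are placeholders, and obtaining the OAuth access token with the spreadsheets scope is a separate step not shown here:

import requests

# Private feed (not /public/full), with an Authorization header.
FEED = ("https://spreadsheets.google.com/feeds/list/"
        "SPREADSHEET_KEY/1/private/full")

entry = (
    '<entry xmlns="http://www.w3.org/2005/Atom" '
    'xmlns:gsx="http://schemas.google.com/spreadsheets/2006/extended">'
    '<gsx:id>1</gsx:id>'
    '<gsx:status>1</gsx:status>'
    '</entry>'
)

resp = requests.post(
    FEED,
    data=entry,
    headers={
        "Content-Type": "application/atom+xml",
        "Authorization": "Bearer ACCESS_TOKEN",
    },
)
print(resp.status_code, resp.text)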
What I'm trying to achieve is something similar to a Firefox add-on called Live HTTP Headers. I'm not trying to get the headers or cookies, but the links that load on the page itself. Let's assume I visited Mail.Yahoo.com; this is pretty much what you would see when I use the add-on:
[screenshot of the requests captured by the add-on]
How can I achieve something similar? Only the links that load on the page itself!
I'm looking forward to reading your suggestions; please enlighten me if you know!
You can download the webpage using a WebClient instance.
Then, with the resulting string, you can extract the URLs using a regular expression:
http://www.geekzilla.co.uk/view2D3B0109-C1B2-4B4E-BFFD-E8088CBC85FD.htm
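For illustration, a rough Python equivalent of that suggestion (the answer above implies .NET's WebClient; the regex here is only a crude approximation of a URL matcher):

import re
import urllib.request

# Download the page source.
html = urllib.request.urlopen("https://mail.yahoo.com").read().decode("utf-8", "ignore")

# Pull out everything that looks like an http(s) link in the source.
links = re.findall(r'https?://[^\s"\'<>]+', html)
for link in links:
    print(link)

Note that this only finds links present in the downloaded HTML source; resources requested later by JavaScript won't show up this way.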