I'm trying to query the Adzuna API, the documentation instructs me to write the following:
https://api.adzuna.com:443/v1/api/jobs/gb/jobsworth?app_id=1d3bc9c4&app_key=de61a42bf523e06f5b7ebe32d630e8fd
... i.e., the URL generated by entering my Application ID and Application Key into this example query generator.
When I enter that in the terminal, it issues the following response:
[1] 10123
bash: https://api.adzuna.com:443/v1/api/jobs/gb/jobsworth?app_id=1d3bc9c4: No such file or directory
[1]+ Exit 127
Which I guess means that I've done it completely wrong.
I also tried plugging my details into the example query they put in the documentation, with stubs for credentials:
http://api.adzuna.com/v1/api/property/gb/search/1?app_id={YOUR_APP_ID}&app_key={YOUR_APP_KEY}
I have no experience querying APIs; I also tried prefixing that with wget and GET, and the response was still garbage.
How can I query this API in the right way?
It seems this query A:
http://api.adzuna.com/v1/api/jobs/gb/search/1?app_id=1d3bc9c4&app_key=de61a42bf523e06f5b7ebe32d630e8fd&results_per_page=20&what=javascript%20developer&content-type=application/json
will work when put into a browser; it is a simple copy of my credentials into the example:
http://api.adzuna.com/v1/api/jobs/gb/search/1?app_id={YOUR API ID}&app_key={YOUR API KEY}&results_per_page=20&what=javascript%20developer&content-type=application/json
But is there a way to execute that without a browser, directly in the terminal, to expedite the process?
but when I tried query A with wget I got:
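The shell output earlier in the question points at the cause: in an unquoted URL, the shell treats & as "run this command in the background", so everything after app_id=... is cut off before wget ever sees it. Quoting the whole URL (e.g. wget "http://...") avoids that. Alternatively, query A can be run from the terminal with Python using only the standard library; the credential values below are placeholders, not real keys:

```python
from urllib.parse import urlencode

# Placeholders: substitute your real Adzuna credentials before running
params = {
    "app_id": "YOUR_APP_ID",
    "app_key": "YOUR_APP_KEY",
    "results_per_page": 20,
    "what": "javascript developer",
    "content-type": "application/json",
}
# urlencode escapes spaces and joins the pairs with &, so no shell quoting issues
url = "http://api.adzuna.com/v1/api/jobs/gb/search/1?" + urlencode(params)
print(url)

# To actually fetch it without a browser:
#   import urllib.request
#   with urllib.request.urlopen(url) as resp:
#       print(resp.read().decode())
```

Building the query string programmatically also means you never have to hand-escape the %20 in "javascript developer".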
Google BigQuery has well-documented ways to get metadata about other objects (tables, datasets, routines, etc.) using SQL-like queries. I am using the Python driver to execute those queries and getting the expected results.
But I didn't find any query that can list the metadata about row access policies, such as:
CREATE ROW ACCESS POLICY My_row_filter ON example_dataset.my_base_table3 GRANT TO ("domain:example.com") FILTER USING (lastName="Doe");
I have gone through the documentation and found that the same can be displayed using the bq command-line tool.
bq ls --row_access_policies example_dataset.my_base_table3
Is there a way to get metadata related to row access policy via Python driver?
Your finding is right: there is no DQL statement for row access policies, as you can see in the documentation.
Also, if you check the Python API for BigQuery, you will notice that a method to access rowAccessPolicies is not implemented there either.
Ways to go:
Open a feature request on the issue tracker.
Until the feature exists, implement it yourself in Python against the BigQuery REST API, using the rowAccessPolicies.list method.
Basically you will have to authenticate properly and make a request to the API, like this:
(This is not the full code, just a way to go; please implement it properly.)
import google.auth
from google.auth.transport.requests import AuthorizedSession
from requests.exceptions import HTTPError

# Pick up Application Default Credentials
# (e.g. from the GOOGLE_APPLICATION_CREDENTIALS environment variable)
credentials, _ = google.auth.default(
    scopes=['https://www.googleapis.com/auth/bigquery.readonly'])
session = AuthorizedSession(credentials)

# Substitute your own projectId, datasetId and tableId
url = ('https://bigquery.googleapis.com/bigquery/v2/projects/{projectId}'
       '/datasets/{datasetId}/tables/{tableId}/rowAccessPolicies')

try:
    response = session.get(url)
    response.raise_for_status()
    print(response.json())
except HTTPError as http_err:
    print(f'HTTP error occurred: {http_err}')
I'm running into a strange issue where I'm getting no results from queries to the CA Agile Rally API after switching to a new API key. Both API keys are created on subscription admin accounts and the queries are exactly the same so I'm not sure why I would have this issue.
It is really the strangest thing and I'm not sure how to troubleshoot it further. The issue only seems to affect queries that return 1 result (i.e. querying by ObjectID/ObjectUUID). Other queries with more than 1 result seem to be working as expected (unless it's an OR query with multiple ObjectIDs/ObjectUUIDs). I've also confirmed that I can get/update the artifact using the ref without a problem. When I switch back to my old API key and run the exact same query, I get the desired result.
I'm using this package, but I've also tested with my own Node.js request API calls and I have the same problem. What could I be missing here?
I needed to give the user a default workspace/project, and that resolved the issue.
What would be the simplest tool/editor (ideally for Mac) to run web API queries (stateless RESTful web API) in a loop, in order to store JSON results in a file?
Very simple basically, I'm just trying to automate the following:
- a first call to get a list of IDs
- then, for each ID, a call to get a few values related to that ID. Values are returned as JSON; I would like to store them in a file (CSV or Excel)
To test the queries, I've used "Advanced REST client" to set up a request with my authentication header and run a few test API queries. It works well, but now I basically want to create a script that gets the whole set of returned data and saves it to a file, with the idea of running this script from time to time. You can't do that with "Advanced REST client", right?
Sorry it's not (yet!) a super advanced question but any help would be greatly appreciated.
You may try Postman - definitely works on (accursed) Mac
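If a script is acceptable, the two steps described in the question can be sketched in Python with just the standard library. The endpoints, token, and field names below are hypothetical; substitute the real API's URLs and the JSON keys it actually returns:

```python
import csv
import json
import urllib.request

# Hypothetical base URL and auth header; adjust to the real API
BASE = "https://api.example.com"
HEADERS = {"Authorization": "Bearer YOUR_TOKEN"}

def get_json(url):
    """GET a URL with the auth header and parse the JSON body."""
    req = urllib.request.Request(url, headers=HEADERS)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def save_csv(rows, path, fields):
    """Write a list of dicts to CSV; extra JSON keys are silently dropped."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields, extrasaction="ignore")
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    ids = get_json(f"{BASE}/items")                      # first call: list of IDs
    rows = [get_json(f"{BASE}/items/{i}") for i in ids]  # one call per ID
    save_csv(rows, "results.csv", fields=["id", "name", "value"])
```

A CSV file opens directly in Excel, which covers the "csv or excel" requirement; the script can then be run periodically via cron or launchd on a Mac.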
There is a bit of confusion; I'm wondering if somebody can help me with this.
Here is an example which is a year old and uses goapp with polymer and endpoint
https://github.com/googlesamples/cloud-polymer-go
Here is a recent example using gcloud
https://github.com/GoogleCloudPlatform/golang-samples/tree/master/appengine_flexible/endpoints
Both are different, as Google changed its approach.
As per the second example, I am able to create endpoint functions which use JSON for input and output. However, there are 2 problems:
1st. It throws an error if I put the functions in a different file under the same package. It works when I run go run *.go, but then I don't understand how app.yaml comes into the picture. I think the URL /_ah/spi/* should work.
2nd. I am using the Postman app to check the request and response from the endpoint. Is there a better way? I thought Google provided a platform to test endpoints.
Is there any example which implements something similar to the 1st one with the new libraries?
Looking forward to your help. Thanks.
I'll get this out of the way first: I'm an amateur programmer (at best). I have some knowledge of how APIs work, but little to no experience manipulating the Podio API directly (i.e. I use Zapier/GlobiFlow a lot and don't write any PHP/Ruby). I'm sure other people can figure this out via the API documentation, but I can't, so I'm really hoping someone can help clarify and give some more detailed instruction.
My Overall objective:
I frequently export Podio files as xlsx from the Podio front-end. My team and I use these to do regular data-analysis tasks in Excel. I would like to make this process easier by automating the retrieval of an updated Podio export into Excel. My plan is to do this via Excel VBA. I understand from other searching that it is possible to send an HTTP request using VBA, so I want to make sure I understand what I need to send to the Podio API to get what I need. How to write the HTTP request in Excel VBA is outside the intended scope of this question (though I'd accept any help on this!).
What I've tried so far:
I know that 'get items as xlsx' is part of the podio API: https://developers.podio.com/doc/items/get-items-as-xlsx-63233
However, I cannot seem to get this to work in the sandbox environment on that page so that I can figure out a valid request URL. I get this message: 'Invalid filtering key', because I have no idea how to fill in that field. The information on that page is not clear on this, nor is it evident on the referenced 'views page'. There are no examples to follow!
I don't even want to do any filtering. I just want to get ALL items in the app. Or I can give it a pre-existing view_id, but that doesn't seem to work either without a {key}.
I realise this is probably dead simple. Please help a noob? :)
Unfortunately, the interactive API sandbox does not behave appropriately for this particular endpoint. For filtering, this API endpoint expects query-string parameters whose field-value pairs consist of integer field IDs and the allowed values for each field. Filtering by fields is entirely optional. It looks like the sandbox page isn't built for this kind of operation with dynamic query-string field names; the {key} field you see on that page is merely a placeholder for whatever field IDs you would use for filtering.
If you want to experiment with this endpoint, I would encourage you to try another dedicated HTTP client first. I was able to get this simple example working with the command-line program wget:
wget --header="Authorization:OAuth2 $MY_SECRET_TOKEN" \
--content-disposition \
"https://api.podio.com/item/app/16476850/xlsx/"
In this case, wget downloaded an Excel file containing all the items in my app without any filtering applied. The additional --content-disposition argument tells wget to save the output as a file with a name using the information in the server's Content-Disposition response header.
With a filter applied:
wget --header="Authorization:OAuth2 $MY_SECRET_TOKEN" \
--content-disposition \
"https://api.podio.com/item/app/16476850/xlsx/?130654431=galaxy"
In this case, the downloaded file filtered the results to items where field ID 130654431 (which is a category field) contains the value galaxy.
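For anyone scripting this outside the shell, the same two wget calls can be sketched in Python with only the standard library. The app ID and field ID are the illustrative values from above; the token is a placeholder, and filter values are assumed to be URL-safe:

```python
import urllib.request

APP_ID = 16476850            # your Podio app ID
TOKEN = "YOUR_SECRET_TOKEN"  # your OAuth2 token

def build_request(app_id, token, filters=None):
    """Build the GET request for Podio's 'get items as xlsx' endpoint.

    filters maps integer field IDs to values, e.g. {130654431: "galaxy"};
    omit it to export ALL items in the app, unfiltered.
    """
    url = f"https://api.podio.com/item/app/{app_id}/xlsx/"
    if filters:
        query = "&".join(f"{k}={v}" for k, v in filters.items())
        url = f"{url}?{query}"
    return urllib.request.Request(url, headers={"Authorization": f"OAuth2 {token}"})

if __name__ == "__main__":
    req = build_request(APP_ID, TOKEN, {130654431: "galaxy"})
    with urllib.request.urlopen(req) as resp, open("items.xlsx", "wb") as f:
        f.write(resp.read())
```

This mirrors the wget examples exactly, except that the output filename is fixed rather than taken from the Content-Disposition header; the same header-plus-URL pattern should translate directly into an Excel VBA HTTP request.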