I am trying to pull data from the third-party API below:
curl --request GET \
--url 'https://api.sendgrid.com/v3/messages?limit=1000&query=last_event_time%20BETWEEN%20TIMESTAMP%20%22{start_date}%22%20AND%20TIMESTAMP%20%22{end_date}%22' \
--header 'authorization: Bearer <<your API key>>'
The problem I am having is that the API has a maximum limit of 1000 rows. How could I work around this? I already have the data filters in place, but my expected output is around 30k rows.
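One common workaround for a hard per-request cap is to split the last_event_time range into smaller slices and issue one request per slice. Below is a rough sketch of that idea; the one-hour slice size, the GNU date arithmetic, the jq merge into a single file, and the .messages field name are all assumptions to adjust for your own data volume and response shape.
#!/usr/bin/env bash
# Hedged sketch: split the overall date range into one-hour slices so each
# request stays under the 1000-row cap, then append the results to one file.
# Assumes GNU date and jq; the slice size and ".messages" field are guesses.
API_KEY="<<your API key>>"
RANGE_START="2024-01-01T00:00:00Z"   # hypothetical overall start_date
RANGE_END="2024-01-02T00:00:00Z"     # hypothetical overall end_date

cursor="$RANGE_START"
while [[ "$cursor" < "$RANGE_END" ]]; do
  next=$(date -u -d "$cursor + 1 hour" +%Y-%m-%dT%H:%M:%SZ)
  curl -s --get 'https://api.sendgrid.com/v3/messages' \
    --data-urlencode "limit=1000" \
    --data-urlencode "query=last_event_time BETWEEN TIMESTAMP \"$cursor\" AND TIMESTAMP \"$next\"" \
    --header "Authorization: Bearer $API_KEY" \
    | jq -c '.messages[]' >> all_messages.json
  cursor="$next"
done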
I am attempting to use Postman and the Okta API collections to populate group memberships for over 1,000 users across several different groups.
This request works when populating a static group ID and static user ID in the request; however, any attempt I make to auto-generate the userId from a CSV file fails with "method not supported".
PUT /api/v1/groups/${groupId}/users/${userId}
Sample Curl
curl -v -X PUT \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: SSWS ${api_token}" \
"https://${yourOktaDomain}/api/v1/groups/00g1fanEFIQHMQQJMHZP/users/00u1f96ECLNVOKVMUSEA"
I am sending a request like below:
curl --request GET \
--url 'https://api.pagerduty.com/services?query=my-service-name' \
--header 'Accept: application/vnd.pagerduty+json;version=2' \
--header 'Authorization: Token token=y_NbAkKc66ryYTWUXYEu' \
--header 'Content-Type: application/json'
I was expecting to be able to filter the JSON list of services down to my service only, using query=my-service-name, but this just returns a JSON list of the first 25 services. The API guide says:
query(string) - Filters the result, showing only the tags whose labels match the query.
Is there any way to get the details of a service with just the service name? Currently I can add a huge limit to the query, which will essentially bring back all service names, and I can find my service in that list, but that is hardly efficient.
I know I can do a GET with the service ID, like below:
curl --request GET \
--url https://api.pagerduty.com/services/SVC_ID \
--header 'Accept: application/vnd.pagerduty+json;version=2' \
--header 'Authorization: Token token=y_NbAkKc66ryYTWUXYEu' \
--header 'Content-Type: application/json'
but my requirement is to use the service name.
Your question lines up with the time frame in which PagerDuty broke the query filter for services. Your query looks accurate in my experience.
See:
https://community.pagerduty.com/forum/t/query-filter-is-not-working/3853
Also, while the description of the API says it "filters on tags", you'll find it filters on the name of the service first.
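While the query filter is unreliable, a fallback is to page through /services with the standard limit/offset parameters and match the service name client-side, rather than pulling everything with one huge limit. A rough sketch with curl and jq (the page size and the exact-name match are assumptions):
#!/usr/bin/env bash
# Hedged sketch: page through /services and print the one whose name matches,
# instead of relying on the server-side query filter. Assumes jq is installed
# and that the response carries PagerDuty's classic "more" pagination flag.
TOKEN="y_NbAkKc66ryYTWUXYEu"
SERVICE_NAME="my-service-name"

offset=0
while : ; do
  page=$(curl -s --request GET \
    --url "https://api.pagerduty.com/services?limit=100&offset=${offset}" \
    --header 'Accept: application/vnd.pagerduty+json;version=2' \
    --header "Authorization: Token token=${TOKEN}")
  echo "$page" | jq --arg name "$SERVICE_NAME" '.services[] | select(.name == $name)'
  [[ $(echo "$page" | jq '.more') == "true" ]] || break
  offset=$((offset + 100))
done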
I'm trying to use the Active Collab API to retrieve project information for reporting purposes. I basically just want to make a daily API call and save the JSON for further reporting in another tool.
For this reason I don't want to use an SDK or anything, just a plain API call to retrieve the data.
Can someone please guide me? I couldn't find the correct URL structure for a self-hosted system.
I found a solution with help from support.
First you have to obtain a token like this:
curl --location --request POST 'https://YOUR.SITE/api/v1/issue-token' \
--header 'Content-Type: application/json' \
--data-raw '{
"username" : "YOUR EMAIL",
"password" : "YOUR PASSWORD",
"client_name" : "xxx",
"client_vendor" : "xxx"
}'
In the next step, you can use the API as documented here, with calls like this one:
curl --location --request GET 'https://YOUR.SITE/api/v1/projects' \
--header 'Content-Type: application/json' \
--header 'X-Angie-AuthApiToken: YOUR TOKEN'
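For the daily export described in the question, the two calls can be combined into one small script that issues a token and writes the /projects JSON to a dated file. A sketch, assuming jq is available and that the issue-token response exposes the token under a "token" field (check what your instance actually returns):
#!/usr/bin/env bash
# Hedged sketch: issue a token, then dump /projects to a dated JSON file.
# The jq path ".token" is an assumption about the issue-token response shape.
BASE="https://YOUR.SITE/api/v1"

TOKEN=$(curl -s --request POST "${BASE}/issue-token" \
  --header 'Content-Type: application/json' \
  --data-raw '{"username":"YOUR EMAIL","password":"YOUR PASSWORD","client_name":"xxx","client_vendor":"xxx"}' \
  | jq -r '.token')

curl -s --request GET "${BASE}/projects" \
  --header 'Content-Type: application/json' \
  --header "X-Angie-AuthApiToken: ${TOKEN}" \
  > "projects_$(date +%F).json"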
How can I get all the data for bounced emails using an API call?
When I use my API call to get data with curl, I only retrieve the first 500 records. Is it possible to get all the data and download it to a file?
The command I am using is below. I tried increasing the limit of the API call to 501 and got the error shown below. Can anyone let me know how to resolve this?
curl --request GET \
--url "https://${API_CALL}" \
--header 'accept: application/json' \
--header "authorization: Bearer ${AUTH_TOKEN}"

curl --request GET \
--url "https://${API_CALL}?limit=501" \
--header 'accept: application/json' \
--header "authorization: Bearer ${AUTH_TOKEN}"
{"errors":[{"field":"limit","message":"must be between 0 and 500"}]}
I'm working on a project and I need a triple-store database in the cloud which supports SPARQL queries.
GraphDB looks good and works fine on my desktop computer (localhost). But when I try to use it in the cloud (CloudDB), REST requests don't work.
Problem: I'm trying to query my repository using REST, by curl -X GET --header 'Accept: application/sparql-results+xml'.
Repository ID: hermesiot
Query: select * where {?s ?p ?o .} limit 100
Results:
Response Code: 404
Response Body: {"message":"Database not found."}
How can I deploy GraphDB to cloud solutions like Azure or other free solutions?
Many thanks :)
According to the official example, your query should be of this kind:
curl --header 'Accept: application/sparql-results+xml' \
--data "query=SELECT+*+{?s+?p+?o.}" \
--user s472kd733007:bhrfk1aa8o0qlj7 \
'https://rdf.s4.ontotext.com/4032537848/wikidata/repositories/fast'
However, the query above does not work for me, whereas the query below does:
curl --header 'Accept: application/sparql-results+xml' \
--data "query=SELECT+*+{?s+?p+?o.}" \
--user s472kd733007:bhrfk1aa8o0qlj7 \
'http://awseb-e-m-awsebloa-11laimnu18r2i-2106042490.eu-west-1.elb.amazonaws.com/4032537848/wikidata/repositories/fast'
In the queries above:
s472kd733007 — API key,
bhrfk1aa8o0qlj7 — API key secret,
4032537848 — user id,
wikidata — database name,
fast — repository id,
http://awseb-e-m-awsebloa-...eu-west-1.elb.amazonaws.com — AWS instance address.
Visit the tabs in your dashboard to obtain the applicable values of these parameters.
As for update queries, see e.g. this answer. Your query should be:
curl --header 'Accept: application/sparql-results+xml' \
--data "update=INSERT+DATA+{owl:Nothing+owl:Nothing+owl:Nothing}" \
--user s472kd733007:bhrfk1aa8o0qlj7 \
'http://awseb-e-m-awsebloa-11laimnu18r2i-2106042490.eu-west-1.elb.amazonaws.com/4032537848/wikidata/repositories/fast/statements'
Please note that the endpoint address in this request is different (it ends in /statements).