I would like to do the following using the Okta API:
One time, I would like to pull the entire system log.
Going forward, I would like to pull only the current day's log information.
The challenge I am facing is that whenever I get the logs, I only get 1000 records. How do I get the whole day's log, which may be more than 1000 records? Is there somebody who can help me with a piece of code that shows how to do this?
Thanks
You can use the Events API to retrieve this information. This API supports Pagination so you can retrieve all the events for a particular filter (like all events after a certain point in time).
1000 is the default limit for the Events API because this object can potentially contain a lot of data.
However, you can specify how many records for a specific time range are returned via the Events API using filters. For example, the following GET request would retrieve the first 100 successful login requests since January 1, 2015.
https://{{YOUR_COMPANY}}.okta.com/api/v1/events?limit=100&filter=published gt "2015-01-01T00:00:00.000Z" and action.objectType eq "core.user_auth.login_success"
If there are more than 100 records, you can get the next set by following the URL tagged rel="next" in the Link response header. If you wanted to get only messages for today, you could change the date.
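For example, here is a minimal sketch in Python (using the requests library) of the one-time full pull followed by Link-header pagination; the ORG_URL and API_TOKEN values are placeholders you would substitute:

import requests

ORG_URL = "https://your-company.okta.com"  # placeholder
API_TOKEN = "your-api-token"               # placeholder

def fetch_all_events(since_iso):
    """Follow rel="next" Link headers until every page is consumed."""
    url = f"{ORG_URL}/api/v1/events"
    params = {"limit": 1000, "filter": f'published gt "{since_iso}"'}
    headers = {"Authorization": f"SSWS {API_TOKEN}"}
    events = []
    while url:
        resp = requests.get(url, headers=headers, params=params)
        resp.raise_for_status()
        events.extend(resp.json())
        url = resp.links.get("next", {}).get("url")  # requests parses Link for you
        params = None  # the "next" URL already carries the query string
    return events

# One-time full pull, then daily pulls from midnight UTC:
# all_events = fetch_all_events("2015-01-01T00:00:00.000Z")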
I am developing an app with the Active Collab API using the What's New endpoint.
I am retrieving this regularly so that I have the latest information. I was wondering whether there is a way to filter the activity by date, so I only get the items since then?
For example, instead of getting 50 records and processing all 50 every time, being able to pass a from parameter (with a timestamp) to collate only the activity since that time would help with both my processing and the size of the request (and would tell me how many things have happened since).
Is this possible?
The What's New API end-point only has a daily filter:
/whats-new/daily/YYYY-MM-DD
Both the global and daily What's New API end-points are paginated, so you can loop through responses by providing (and incrementing) the page GET argument:
/whats-new?page=2
until you reach records that are older than the timestamp that you are looking for (or get an empty result). At that point, you just break, and you have all the updates that you were looking for.
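As a minimal sketch in Python (using the requests library), assuming an authenticated session and that each record carries a created_on timestamp (the field name is an assumption; adjust it to the actual payload):

import requests

BASE_URL = "https://your-activecollab-host/api"  # placeholder

def whats_new_since(session, since_ts):
    """Page through /whats-new until records older than since_ts appear."""
    collected = []
    page = 1
    while True:
        resp = session.get(f"{BASE_URL}/whats-new", params={"page": page})
        resp.raise_for_status()
        records = resp.json()
        if not records:
            break  # empty result: we ran out of pages
        for record in records:
            if record["created_on"] < since_ts:  # assumed field name
                return collected  # reached older records: stop
            collected.append(record)
        page += 1
    return collected

# session = requests.Session()  # configure your auth headers on it first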
I've noticed that when retrieving a story/defect right after updating it, the retrieve response sometimes returns the field values as if the update never happened. Retrying the retrieve after a short delay (~500 ms) returns the updated field values as expected. Is this a known behaviour? Is there any way of avoiding it?
I'm using the Rally API 2.0 - https://rally1.rallydev.com/slm/webservice/v2.0/
The update is being performed using this URI:
POST /slm/webservice/v2.0/Defect/14173461229?key=<key> HTTP/1.1
I'm retrieving the story after update as follows:
GET /slm/webservice/v2.0/artifact?query=(ObjectId%20=%2014173461229)&start=1&pagesize=20&fetch=true HTTP/1.1
What is your integration doing that it needs to re-poll the artifact within less than 1 second of POSTing an update? Is there a second process doing the polling that is revealing the latency for the updates? Does your integration run multiple threads? Does the response time vary at all depending on time of day, etc.? Any number of factors could be at play here, but 500 ms doesn't seem like an unreasonable refresh rate given factors such as latency over HTTP/S as well as server-side database and cache updates. That said, for an in-depth look you may wish to inquire with Rally Support (rallysupport@rallydev.com), as they have tools that can help evaluate server-side response time corresponding to requests by specific UserID.
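If the integration does need to read back right after writing, a common workaround is to re-fetch with a short delay until the expected value appears. A minimal sketch in Python, where fetch_artifact() is a hypothetical wrapper around your existing GET call (not part of the Rally API):

import time

def read_after_write(fetch_artifact, field, expected_value,
                     retries=5, delay=0.5):
    """Re-fetch until the updated field value is visible, then return it."""
    artifact = None
    for _ in range(retries):
        artifact = fetch_artifact()  # your existing GET, wrapped in a function
        if artifact.get(field) == expected_value:
            return artifact
        time.sleep(delay)  # give server-side caches time to catch up
    return artifact  # retry budget exhausted; return the last response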
Context: let's say we want to retrieve the whole list of starred repositories for a given user periodically (once per day, per hour, or every few minutes).
There are at least 2 approaches to do that:
Execute a GET to https://api.github.com/users/evereq/starred and use the URL with rel='next' in the 'Link' response header to get the next page URL (we should do that until we get no "next" page in the response, meaning we have reached the end). This seems to be the approach recommended by GitHub.
Iterate the 'page' parameter (from 1 upward) using GET to https://api.github.com/users/evereq/starred?page=XXX until you get 0 results in the response. Once you get 0 results, you are finished (not recommended, because GitHub could, for example, move from page numbers to "hash" values, as it has already done for some API operations).
Now, let's say we want to make sure we use Conditional Requests (see https://docs.github.com/en/rest/overview/resources-in-the-rest-api#conditional-requests) to save our API usage limits (and traffic, trees in the world, etc.).
So we add, for example, 'If-None-Match' to our request headers and check whether the response status is 304 (Not Modified). If so, it means nothing has changed since our last request. That works OK.
The issue, however, is that the stopping conditions in 1) and 2) above no longer work when you use conditional requests!
I.e., with approach 1), you don't get the Link response header at all when you use conditional requests.
So you need to execute one more request, with a page number beyond the last page for which you already have an ETag, and see that it returns 0 results; only then do you know you are done. That way you basically "waste" one request to the GitHub API (because it has no conditional request headers to match against).
Same with approach 2): every request just comes back with status 304 and no results... So again, to know you are done, you need to make at least one additional request which does return 0 results.
So the question is: given that the GitHub API does not send back the Link response header when a conditional request (using an ETag) results in status 304, how can we know when to stop paging? Is this a bug in the GitHub API implementation, or am I missing something?
We don't know the maximum page number, so to know when to stop we have to execute one more "wasted" request and check whether we get 0 results back!
I also can't find a way to query GitHub for the total count of starred repositories (so I could calculate in advance how many pages to iterate), and the responses don't include something like "X-Total-Count" that would let me derive the page count with simple math.
Any ideas how to avoid that one extra ('end') request and still use conditional requests?
If you make one request per day, it's OK to accept such waste, but what if you make such a request once per minute? You will quickly use up all your API usage limits!
UPDATE
Well, after a few more tests, I now see the following "rule" (I can't, however, find it anywhere in the docs, so I'm not sure whether it's a rule or just an assumption): if a user stars something new, the result for EVERY requested page contains a different ETag value compared to the previous one and no longer has status 304! That means it's enough to request just the first page and check the status: if it's 304 (Not Modified), we do NOT need to check the next pages, i.e. we are DONE, as nothing changed on any page. Is this a correct approach or just a coincidence?
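For reference, a minimal Python sketch (using the requests library) of that first-page probe, under the assumption above that a 304 on page 1 means nothing changed anywhere:

import requests

def starred_if_changed(user, etag_store):
    """etag_store is any dict-like cache of ETags, persisted between runs."""
    url = f"https://api.github.com/users/{user}/starred"
    headers = {}
    if url in etag_store:
        headers["If-None-Match"] = etag_store[url]
    resp = requests.get(url, headers=headers)
    if resp.status_code == 304:
        return None  # page 1 unchanged => nothing changed anywhere
    resp.raise_for_status()
    etag_store[url] = resp.headers.get("ETag")
    repos = resp.json()
    next_url = resp.links.get("next", {}).get("url")
    while next_url:  # page 1 changed: walk the remaining pages
        resp = requests.get(next_url)
        resp.raise_for_status()
        repos.extend(resp.json())
        next_url = resp.links.get("next", {}).get("url")
    return repos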
We do indeed return pagination relations in the Link response header when the content has changed. Since we don't support a since parameter for that call, you'll need to sort by most recent results, maintain a client-side cursor for the last known ID or timestamp (based on your sort criteria), and stop paging when it shows up in your paginated results. Conditional requests will just let you know whether page 1 has changed.
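A minimal sketch of that client-side cursor in Python (using the requests library); last_seen_id is whatever you persisted from the previous run:

import requests

def new_stars_since(user, last_seen_id):
    """Collect newest-first until the previously seen repo ID shows up."""
    url = f"https://api.github.com/users/{user}/starred"
    params = {"sort": "created", "direction": "desc"}
    fresh = []
    while url:
        resp = requests.get(url, params=params)
        resp.raise_for_status()
        for repo in resp.json():
            if repo["id"] == last_seen_id:
                return fresh  # reached already-seen territory: stop paging
            fresh.append(repo)
        url = resp.links.get("next", {}).get("url")
        params = None  # the "next" URL already carries the query string
    return fresh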
We haven't settled on a way to return counts on our listing methods, but a really low-tech solution is to set the page size to 1, grab the rel=last Link relation and check its page parameter value.
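That trick could look like this in Python (a sketch, using the requests library):

import requests
from urllib.parse import urlparse, parse_qs

def starred_count(user):
    """With one item per page, the rel="last" page number is the total."""
    resp = requests.get(f"https://api.github.com/users/{user}/starred",
                        params={"per_page": 1})
    resp.raise_for_status()
    last = resp.links.get("last", {}).get("url")
    if last is None:
        return len(resp.json())  # zero or one star: no pagination links
    return int(parse_qs(urlparse(last).query)["page"][0])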
Hope that helps.
I am building an app that accesses the QuickBooks API v2.
I am looking for a way to retrieve only data that has changed.
For example, from time to time I want to be able to check whether there have been any changes to the chart of accounts in the QB data. Is there a quick way to do this without parsing a large response body? Maybe something like requesting and comparing just a checksum, and then requesting the whole chart of accounts to compare and update if there is a change? Or even just requesting the changes that occurred after a certain date?
This need is not limited to the chart of accounts. For example, I may want to update historic transaction data, but only with the changes (e.g., a change to an old transaction), not the entire database, which can be quite large.
Answer
On further reading of the API docs, it looks like I should be able to filter the response using the created_at and updated_at metadata.
The filter is called Change Data Capture (CDC)
https://developer.intuit.com/docs/0025_quickbooksapi/0050_data_services/v2/0500_quickbooks_windows/0100_calling_data_services/0015_retrieving_objects
<ItemReceiptQuery xmlns='http://www.intuit.com/sb/cdm/v2'>
  <CDCAsOf>2010-12-04T09:30:47.0Z</CDCAsOf>
</ItemReceiptQuery>
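A sketch of sending that query from Python with the requests and requests-oauthlib libraries; the endpoint URL, realm ID, and OAuth credentials below are placeholders, so substitute whatever your QuickBooks integration already uses:

import requests
from requests_oauthlib import OAuth1

CDC_BODY = """<ItemReceiptQuery xmlns='http://www.intuit.com/sb/cdm/v2'>
  <CDCAsOf>2010-12-04T09:30:47.0Z</CDCAsOf>
</ItemReceiptQuery>"""

# All credentials and the URL below are placeholders.
auth = OAuth1("consumer_key", "consumer_secret", "token", "token_secret")
resp = requests.post(
    "https://services.intuit.com/sb/itemreceipt/v2/REALM_ID",  # placeholder
    data=CDC_BODY,
    headers={"Content-Type": "application/xml"},
    auth=auth,
)
resp.raise_for_status()
print(resp.text)  # only item receipts changed since the CDCAsOf timestamp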
thanks
Jarred
I'm trying to aggregate some information about the kanban states of my user stories. If I query a PifTeam item, I get a summarized collection of the UserStories associated with it.
Example query:
https://rally1.rallydev.com/slm/webservice/1.40/portfolioitem/pifteam/99999999999.js
However I then have to run a loop on the UserStories collection, individually querying each one to get at the information I need. This potentially results in a lot of web service calls.
Is there a way to return the full hierarchical requirement information in the original pifteam query so that there is only one webservice call which returns all sub-objects? I read the webservice api and was trying to play with the fetch parameter but had no success.
This functionality will be disabled in WSAPI 2.0 but will continue to be available in the 1.x versions. That said, you should be able to use fetch to get the fields on story that you need, like this:
/pifteam/9999.js?fetch=UserStories,FormattedID,Name,PlanEstimate,KanbanState
Fetch will hydrate the specified fields on sub-objects even if the root object type doesn't have those fields. So by fetching UserStories, the returned collection will be populated with stories, each having the FormattedID, Name, PlanEstimate and KanbanState fields included.
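Putting that together, a minimal Python sketch (using the requests library; the credentials are placeholders, and the top-level response wrapper key is read generically since its name varies by type):

import requests

url = ("https://rally1.rallydev.com/slm/webservice/1.40"
       "/portfolioitem/pifteam/99999999999.js")
params = {"fetch": "UserStories,FormattedID,Name,PlanEstimate,KanbanState"}

resp = requests.get(url, params=params, auth=("user", "password"))  # placeholders
resp.raise_for_status()
team = next(iter(resp.json().values()))  # wrapper key varies by object type
for story in team["UserStories"]:
    print(story["FormattedID"], story["Name"], story["KanbanState"])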
There is no way to do it from Rally's standard Web Services API (WSAPI) but you can from the new Lookback API (LBAPI). The query would look something like this:
https://rally1.rallydev.com/analytics/v2.0/service/rally/workspace/<ObjectID_for_Workspace>/artifact/snapshot/query.js?find={__At:"current",_TypeHierarchy:"HierarchicalRequirement",Children:null,_ItemHierarchy:<ObjectID_for_PortfolioItem>}&fields=["Name"]
Fill in the ObjectIDs for your Workspace and PortfolioItem. The _ItemHierarchy field will cross work-item-type boundaries and goes all the way from PortfolioItems down through the Story hierarchy to Defects and even Tasks, so I added _TypeHierarchy:"HierarchicalRequirement" to limit it to Stories. I have specified Children:null, which means you'll only get back leaf Stories. The __At:"current" clause gets the current tree and values. Remember, it's the "Lookback" API, so you could retrieve the state of the objects at any moment in history; __At:"current" says to use the current values and tree.
Note, the LBAPI is delayed from current values in the system by anywhere from seconds to minutes. Typically it's about 30 seconds behind. You can see how far behind it is by checking the ETLDate field in the response.
Details about the LBAPI can be found here. Note that the LBAPI is available in preview now for almost all Rally customers. There are still a number of customers for whom it is not yet turned on. The best way to tell whether it's working for your subscription is to try the query.
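For reference, a minimal Python sketch of that LBAPI query (using the requests library; the ObjectIDs and credentials are placeholders):

import json
import requests

WORKSPACE_OID = 12345       # placeholder
PORTFOLIO_ITEM_OID = 67890  # placeholder

url = ("https://rally1.rallydev.com/analytics/v2.0/service/rally/workspace/"
       f"{WORKSPACE_OID}/artifact/snapshot/query.js")
params = {
    "find": json.dumps({
        "__At": "current",
        "_TypeHierarchy": "HierarchicalRequirement",
        "Children": None,  # serializes to Children:null (leaf stories only)
        "_ItemHierarchy": PORTFOLIO_ITEM_OID,
    }),
    "fields": json.dumps(["Name"]),
}
resp = requests.get(url, params=params, auth=("user", "password"))  # placeholders
resp.raise_for_status()
data = resp.json()
print(data.get("ETLDate"))  # how far behind "current" the snapshot data is
for snapshot in data["Results"]:
    print(snapshot["Name"])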