FogBugz API request for flat time estimates

I have a FogBugz application which gets data from their API and generates some reports. I need to get the hours estimate for each individual case I search for. From the API I can see how to get the hours; however, those hours include the hours of all the child/dependent cases, and I do not want those included in my results.
How can you request the time estimate for a specific case in FogBugz? As an analogy, if I do the current request with hrsEstimate, I get a result which in FogBugz's web app is equivalent to Grid View - Outline. I need the result that is equivalent to Grid View - Flat.

Does column hrsCurrEst work better?
http://help.fogcreek.com/the-fogbugz-api/cases#Column_Titles
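In case it helps, here is a minimal Python sketch of how such a query could look, assuming the classic FogBugz XML API (api.asp); the URL, token, and case number are placeholders you would replace with your own:

import requests
import xml.etree.ElementTree as ET

# Placeholders -- substitute your own FogBugz URL, API token and case number.
FOGBUGZ_URL = "https://example.fogbugz.com/api.asp"
TOKEN = "your-api-token"
CASE_NUMBER = "1234"

# Ask for the case title plus the hrsCurrEst column suggested above.
resp = requests.get(FOGBUGZ_URL, params={
    "cmd": "search",
    "q": CASE_NUMBER,
    "cols": "sTitle,hrsCurrEst",
    "token": TOKEN,
})
root = ET.fromstring(resp.text)
for case in root.iter("case"):
    print(case.findtext("sTitle"), case.findtext("hrsCurrEst"))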

Related

Return the API's response to the client in batches with pagination using Flask/Python?

I am trying to figure out whether there is something that can help with the use case mentioned below.
My use case is that I have an API that returns data from the DB for a given time period. For a small time period there is no issue, but as the queried time period grows, the API takes significantly longer to respond, and I do not want the UI to keep loading until the full response is received.
So I was thinking there should be some mechanism by which I can get the response from the API incrementally (in batches), so that I can show results to the user and the user does not have to wait until the API has finished executing.
Any help on coding or design would be greatly appreciated.
Well, I did some R&D and was able to fulfill my requirement with the pagination support in flask-mongoengine:
http://docs.mongoengine.org/projects/flask-mongoengine/en/latest/
which has sample implementations.
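For anyone looking for a concrete starting point, here is a minimal sketch of that approach, assuming flask-mongoengine's paginate() on a hypothetical Record model (the model and field names are made up):

from flask import Flask, jsonify, request
from flask_mongoengine import MongoEngine

app = Flask(__name__)
app.config["MONGODB_SETTINGS"] = {"db": "mydb"}  # assumed database name
db = MongoEngine(app)

class Record(db.Document):  # hypothetical document
    timestamp = db.DateTimeField()
    value = db.StringField()

@app.route("/records")
def records():
    # The client pulls one batch at a time: /records?page=2&per_page=100
    page = int(request.args.get("page", 1))
    per_page = int(request.args.get("per_page", 100))
    batch = Record.objects.order_by("timestamp").paginate(page=page, per_page=per_page)
    return jsonify({
        "items": [{"timestamp": r.timestamp.isoformat(), "value": r.value}
                  for r in batch.items],
        "page": batch.page,
        "total_pages": batch.pages,
        "has_next": batch.has_next,
    })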

"Whats new" date restriction on API

I am developing an app with the Active Collab API using the What's New endpoint.
I am retrieving this regularly so that I have the latest information. I was wondering whether there is a way to filter the activity by date, to get only the items since then?
For example, instead of getting 50 records and processing all 50 every time, if I could pass a from parameter (with a timestamp) to collate only the activity since that time, it would help with both my processing and the size of the request (and would tell me how many things have happened since).
Is this possible?
The What's New API end-point only has a daily filter:
/whats-new/daily/YYYY-MM-DD
Both the global and daily What's New API end-points are paginated, so you can loop through responses by providing (and incrementing) the page GET argument:
/whats-new?page=2
until you reach records that are older than the timestamp you are looking for (or get an empty result). At that point, you just break, and you have all the updates you were looking for.
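As a rough illustration, the loop could look something like this in Python (the instance URL, auth header, and created_on field are assumptions; adjust them to your Active Collab setup):

import requests

BASE_URL = "https://example.activecollab.com/api/v1"  # hypothetical instance
HEADERS = {"X-Angie-AuthApiToken": "your-token"}       # assumed auth scheme
SINCE = 1425168000  # cutoff as a Unix timestamp (1-Mar-2015, for example)

collected = []
page = 1
while True:
    resp = requests.get(f"{BASE_URL}/whats-new",
                        params={"page": page}, headers=HEADERS)
    items = resp.json()
    if not items:
        break  # ran out of records entirely
    # Assumes records are returned newest first with a created_on timestamp.
    fresh = [i for i in items if i.get("created_on", 0) > SINCE]
    collected.extend(fresh)
    if len(fresh) < len(items):
        break  # this page crossed the cutoff, so older pages can be skipped
    page += 1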

What is the difference between parsing betting website for live scores vs official website API?

I want to monitor live scores for some soccer matches. I have two ways to do this:
the official API from the website (free)
parse the website's source code myself and get the data from it (this needs to be done every second)
What is the difference? Is calling the API faster?
This can depend on quite a lot that is external to this specific scenario, but given the context, yes, the API would be much faster. The difference is in what data is being sent/received/parsed.
In either scenario you'd need some timer to tick and parse the results (website or API), so there's no performance difference in the "wait code", but the big difference is in the data itself that is parsed. When you call the API, chances are that you will send a specific parameter or call a specific function that indicates what you're looking for; pseudo-code example:
SoccerSiteApi.GetValue(SCORE, team1, team2);
Or
SoccerSiteApi.GetCurrentScores(team1, team2);
By calling the API, you are only sending and receiving a few hundred bytes (or more, depending on the data) and getting back exactly what you want; that is, you don't need to parse the scores out of the values sent back, since they are the scores, so no processing time is spent doing anything additional with the data itself.
If, however, you were to parse the entire web site, you would need to make an HTTP GET request (and all that entails) to get the entire page (which could be a couple of hundred KB or a few MB depending on content) and then spend processing time extracting the exact data you are looking for, and then do this every second.
So the biggest difference is the amount of data and the time spent processing it.
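To make the comparison concrete, here is a rough Python sketch of both approaches (the URLs, JSON shape, and CSS selector are all hypothetical):

import requests
from bs4 import BeautifulSoup  # only needed for the scraping variant

API_URL = "https://api.example.com/v1/scores?team1=ABC&team2=XYZ"
PAGE_URL = "https://www.example.com/live-scores"

# API: a small JSON payload containing exactly the fields asked for.
score = requests.get(API_URL).json()  # e.g. {"team1": 2, "team2": 1}

# Scraping: download the whole page, then spend CPU time digging the score out.
html = requests.get(PAGE_URL).text  # potentially hundreds of KB of markup
soup = BeautifulSoup(html, "html.parser")
score_node = soup.select_one(".live-score")  # selector is a guess at the page structure
print(score, score_node.text if score_node else "score not found")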
Hope that can help.

Obtaining System Log using Okta API

I would like to do the following using the Okta API:
One time, I would like to pull the entire system log.
Going forward, I would like to pull only that day's log information.
The challenge that I am facing is that whenever I get the logs, I only get 1000 records. How do I get the whole day's log? It may be more than 1000 records. Is there somebody who can help me with a piece of code which shows how to do this?
Thanks
You can use the Events API to retrieve this information. This API supports pagination, so you can retrieve all the events for a particular filter (like all events after a certain point in time).
1000 is the default limit for the Events API because this object can potentially contain a lot of data.
However, you can specify how many records for a specific time range are returned via the Events API using filters. For example, the following GET request would retrieve the first 100 successful login requests since 1-Jan-2015:
https://{{YOUR_COMPANY}}.okta.com/api/v1/events?limit=100&filter=published gt "2015-01-01T00:00:00.000Z" and action.objectType eq "core.user_auth.login_success"
If there are more than 100 records, you can get the next set by following the URL in the Link response header with rel="next". If you wanted to get only messages for today, you could change the date.
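Putting that together, a small Python sketch of the pagination loop might look like this (the org URL and token are placeholders; the requests library exposes the parsed Link header via resp.links):

import requests

URL = "https://your-company.okta.com/api/v1/events"  # substitute your org
HEADERS = {"Authorization": "SSWS your-api-token"}   # Okta API token scheme
PARAMS = {"limit": 100,
          "filter": 'published gt "2015-01-01T00:00:00.000Z"'}

events = []
url, params = URL, PARAMS
while url:
    resp = requests.get(url, headers=HEADERS, params=params)
    params = None  # the rel="next" URL already carries the query string
    events.extend(resp.json())
    # Follow the Link header's rel="next" until Okta stops providing one.
    url = resp.links.get("next", {}).get("url")

print(f"Fetched {len(events)} events")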

Github API Conditional Requests with paging

Context: let's say we want to retrieve the whole list of repositories starred by a given user periodically (once per day, per hour, or every few minutes).
There are at least 2 approaches to do that:
execute a GET to https://api.github.com/users/evereq/starred and use the URL with rel='next' in the 'Link' response headers to get the next page URL (we should do that till we get no "next" page in the response, meaning we have reached the end). This seems to be the approach recommended by GitHub; see the sketch after this list.
iterate the 'page' parameter (from 1 to infinity) using GET to https://api.github.com/users/evereq/starred?page=XXX till you get 0 results in the response. Once you get 0 results, you finish (not recommended, because instead of page numbers GitHub could move to "hash" values, as it has already done for some API operations).
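For reference, here is how I do approach 1) with Python's requests library, which parses the Link header into resp.links (unauthenticated here; add an Authorization header for higher rate limits):

import requests

USER = "evereq"
url = f"https://api.github.com/users/{USER}/starred"
starred = []
while url:
    resp = requests.get(url)
    resp.raise_for_status()
    starred.extend(resp.json())
    # Follow rel="next" from the Link header until GitHub stops providing one.
    url = resp.links.get("next", {}).get("url")

print(f"{USER} has starred {len(starred)} repositories")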
Now, let's say we want to make sure we use Conditional Requests (see https://docs.github.com/en/rest/overview/resources-in-the-rest-api#conditional-requests) to save our API usage limits (and traffic, trees in the world, etc.).
So we add, for example, 'If-None-Match' to our request headers and check if the response status is 304 (Not Modified). If so, it means that nothing has changed since our last request. That works OK.
The issue, however, is that the ways described in 1) and 2) above of detecting when to stop do NOT work anymore when you use Conditional Requests!
I.e. with approach 1), you don't get Link response headers at all when you use Conditional Requests.
So you will need to execute one more request, with a page number bigger than the page for which you already have an ETag, and see that it returns 0 results; only then do you know you are done. That way you basically "waste" one request to the GitHub API (because you have no ETag for it, it cannot be a Conditional Request).
Same with approach 2): you basically get empty responses with status 304 for every page... So again, to know you are done, you need to make at least one additional request which does return 0 results.
So the question is: given that the GitHub API does not send back the Link response header for conditional requests (at least for queries using an ETag which result in status 304), how can we know when to stop paging? Is it a bug in the GitHub API implementation, or am I missing something?
We don't know the maximum page number, so to know when to stop we have to execute one more "wasted" request and check that we get 0 results back!
I also can't find a way to query GitHub for the total count of starred repositories (so I could calculate how many pages I need to iterate in advance), and responses do not include something like "X-Total-Count", so I can't know when to stop using simple math on the page count.
Any ideas how to save that one ('end') request and still use Conditional Requests?
If you do one request per day, it's OK to accept such waste, but what if you do such requests once per minute? You will quickly use up all your API usage limits!
UPDATE
Well, after a few more tests, I now see the following "rule" (I can't, however, find it anywhere in the docs, so I'm not sure whether it's a rule or just an assumption): if the user stars something new, the result for EVERY requested page contains a different ETag value compared to the previous one and no longer has status 304! That means it's enough to just request the first page and check its status: if it's 304 (Not Modified), we do NOT need to check the next pages, i.e. we are DONE, as nothing has changed on any page. Is this a correct approach or just a coincidence?
We indeed return pagination relations in the Link response header when the content has changed. Since we don't support a since parameter for that call, you'll need to sort by most recent results, maintain a client-side cursor for the last known ID or timestamp (based on your sort criteria), and stop paging when it shows up in your paginated results. Conditional requests will just let you know whether page 1 has changed.
We haven't settled on a way to return counts on our listing methods, but a really low-tech solution is to set the page size to 1, grab the rel=last Link relation, and check its page parameter value.
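To illustrate that cursor idea and the low-tech count together, here is a rough Python sketch (not official sample code; how you persist the ETag and cursor between runs is up to you):

import requests
from urllib.parse import urlparse, parse_qs

FIRST_PAGE = "https://api.github.com/users/evereq/starred"

# State you would persist between runs (hypothetical starting values).
saved_etag = None   # ETag of page 1 from the previous sync
last_seen_id = 0    # ID of the newest repo already known

def fetch_new_stars():
    """One request when nothing changed; otherwise page until the cursor."""
    global saved_etag, last_seen_id
    headers = {"If-None-Match": saved_etag} if saved_etag else {}
    resp = requests.get(FIRST_PAGE, headers=headers)
    if resp.status_code == 304:
        return []  # page 1 unchanged -> nothing new anywhere
    saved_etag = resp.headers.get("ETag")

    new_items = []
    while True:
        for repo in resp.json():  # newest stars come first
            if repo["id"] == last_seen_id:
                # Reached the previous sync point: stop paging here.
                if new_items:
                    last_seen_id = new_items[0]["id"]
                return new_items
            new_items.append(repo)
        next_url = resp.links.get("next", {}).get("url")
        if not next_url:
            break  # ran off the end of the list
        resp = requests.get(next_url)
    if new_items:
        last_seen_id = new_items[0]["id"]
    return new_items

def total_starred():
    # Low-tech count: with per_page=1 the rel="last" page number is the total.
    resp = requests.get(FIRST_PAGE, params={"per_page": 1})
    last_url = resp.links.get("last", {}).get("url", "")
    if not last_url:
        return len(resp.json())  # only one page of results
    return int(parse_qs(urlparse(last_url).query)["page"][0])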
Hope that helps.