I'm having some trouble with the "list" API call - it works OK for about 10 pages of results (feeding in the value of nextPageToken into subsequent calls), but then I get a 500 error (always at the same point). Is there a limit on the number of pages of results that can be listed?
Also, just confirming - the discovery file has a maxResults parameter but it seems to have no effect - is this correct?
Cheers,
Miles
I'm an engineer at Google. We had a bug related to certain Unicode characters in survey titles that was causing some list requests to fail. We've fixed the decoding issue, so list should now be able to reach the rest of your surveys.
maxResults is not implemented by the API and should not be used.
Thanks for reaching out about this issue.
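For anyone landing here later, the paging loop described in the question looks roughly like the sketch below. The endpoint URL, the pageToken request parameter, and the resources field are placeholders/assumptions; only nextPageToken comes from the question itself.

```python
import requests

# Placeholder endpoint; the real URL and field names come from the API's discovery file.
LIST_URL = "https://www.googleapis.com/example/v1/items"

def list_all(session: requests.Session):
    """Page through the list call, feeding nextPageToken back into each request."""
    items, token = [], None
    while True:
        params = {"pageToken": token} if token else {}
        resp = session.get(LIST_URL, params=params)
        resp.raise_for_status()  # a 500 here was the symptom of the Unicode bug above
        payload = resp.json()
        items.extend(payload.get("resources", []))
        token = payload.get("nextPageToken")
        if not token:  # no token means we've reached the last page
            return items
```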
Related
I am using Mailchimp's archive URL in PHP -- I am simply fetching the URL and displaying it as it sits, in order to white-label the funky URL, i.e.
https://us17.campaign-archive.com/home/?u=xxxxxyyyyyxxxxyyyy&id=xxxxyyyyyxxxxyy
In doing so I have read through both the Archive and API documentation, and have found nothing on a parameter for row count. It defaults to 20 as stated in the Archive docs, but I know I have seen archives with a larger row count than that. Is anyone familiar enough with the URL parameters used by MailChimp to increase the row count to, say, 100? e.g.
https://us17.campaign-archive.com/home/?u=xxx&id=yyy&count=100
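For context, the fetch-and-display (white-label) approach described above looks roughly like this, shown as a Python sketch rather than the original PHP. The u/id values are the question's placeholders, and count=100 is included only to illustrate what was attempted; the answer below explains why an end-user parameter like this doesn't work.

```python
import requests

# Placeholder u/id values from the question; count=100 is the attempted (unsupported) parameter.
ARCHIVE_URL = "https://us17.campaign-archive.com/home/"
PARAMS = {"u": "xxxxxyyyyyxxxxyyyy", "id": "xxxxyyyyyxxxxyy", "count": 100}

def fetch_archive_html() -> str:
    """Fetch the Mailchimp archive page so it can be re-served under our own URL."""
    resp = requests.get(ARCHIVE_URL, params=PARAMS, timeout=30)
    resp.raise_for_status()
    return resp.text  # echo this HTML from our own page to hide the funky URL
```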
It's been a problem for years. Even in 2022 there is still no known way for an end-user to get more than the most recent 20 issues from Mailchimp; they simply refuse to add/allow that ability.
However, the newsletter creator can go into their backend and generate/enable a JavaScript embed that has the &show= parameter, which can be increased.
https://mailchimp.com/help/add-an-email-campaign-archive-to-your-website/
Again, only the campaign creator can do this, not some random end-user/reader.
I am having a dilemma. I need to fetch data for some products by ID, and the number of selected products can vary from a couple to thousands.
I have tested and found that GET is not possible because the request exceeds the HeaderSizeLimit of 8192.
After discussing it with colleagues, I changed to POST with the IDs in the body. Everything works, but there is still a lot of debate about this. Have you encountered something like this? What was your approach?
The first question for me is: do you really need to pass all those IDs in a single request? How is this list of IDs generated in the first place? Could the server know this list in advance?
For example, if the list of IDs is obtained by doing a search query on the same server, perhaps that search query can already emit the list of entities.
I find that in most cases this can be avoided, but there are some exceptions.
If you find that you can't avoid this, I would suggest you use the new HTTP QUERY method instead of POST, but POST should be fine too as a fallback.
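A minimal sketch of the POST-with-IDs-in-the-body approach, using Python's requests; the endpoint path and payload field name are assumptions:

```python
import requests

# Hypothetical endpoint; the real path and payload shape depend on your API.
BATCH_URL = "https://api.example.com/products/batch-get"

def fetch_products(product_ids):
    """Fetch product data for an arbitrary number of IDs.

    Putting the IDs in the request body keeps the request line and headers
    small, so the header size limit is never an issue.
    """
    resp = requests.post(BATCH_URL, json={"ids": product_ids}, timeout=30)
    resp.raise_for_status()
    return resp.json()

products = fetch_products(["p-1", "p-2", "p-3"])
```

The same request shape works for QUERY once your client and server support it (e.g. requests.request("QUERY", BATCH_URL, json=...)); only the method name changes.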
Context: let's say we want to retrieve the whole list of repositories starred by a given user periodically (once per day, hour, or few minutes).
There are at least 2 approaches to do that:
1) Execute a GET to https://api.github.com/users/evereq/starred and use the URL with rel='next' in the 'Link' response headers to get the next page URL (we repeat that until there is no "next" page in the response, meaning we've reached the end). This seems to be the approach recommended by GitHub; a sketch of it follows this list.
2) Iterate the 'page' parameter (from 1 upwards) using GET to https://api.github.com/users/evereq/starred?page=XXX until you get 0 results in the response. Once you get 0 results, you finish. (Not recommended, because instead of page numbers GitHub could move to "hash" values; GitHub has already done this for some API operations.)
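A minimal sketch of approach 1), assuming Python with the requests library (which parses the Link header into response.links):

```python
import requests

def fetch_all_starred(user, token=None):
    """Walk the starred list by following the Link: rel="next" header."""
    headers = {"Accept": "application/vnd.github+json"}
    if token:
        headers["Authorization"] = f"token {token}"

    url = f"https://api.github.com/users/{user}/starred"
    starred = []
    while url:
        resp = requests.get(url, headers=headers)
        resp.raise_for_status()
        starred.extend(resp.json())
        url = resp.links.get("next", {}).get("url")  # None when there is no next page
    return starred

repos = fetch_all_starred("evereq")
```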
Now, let's say we want to make sure we use Conditional Requests (see https://docs.github.com/en/rest/overview/resources-in-the-rest-api#conditional-requests) to save our API usage limits (and traffic, trees in the world, etc.).
So we add, for example, 'If-None-Match' to our request headers and check whether the response status is 304 (Not Modified). If so, it means nothing has changed since our last request. That works OK.
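As a sketch (continuing the requests-based example above), the conditional-request part could look like this; the etags store is something we persist ourselves between runs:

```python
import requests

etags = {}  # page URL -> ETag from the previous run; persist this between runs

def conditional_get(url, token=None):
    """GET a page with If-None-Match; returns None if it answered 304 (unchanged)."""
    headers = {"Accept": "application/vnd.github+json"}
    if token:
        headers["Authorization"] = f"token {token}"
    if url in etags:
        headers["If-None-Match"] = etags[url]
    resp = requests.get(url, headers=headers)
    if resp.status_code == 304:
        return None  # unchanged since the stored ETag; does not count against the rate limit
    resp.raise_for_status()
    etags[url] = resp.headers.get("ETag")
    return resp.json()
```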
The issue, however, is that the way we detect when to stop in 1) and 2) above does NOT work anymore when you use conditional requests!
I.e. with approach 1), you don't get Link response headers at all when you use conditional requests.
So you need to execute one more request, for a page beyond the ones you already have ETags for, and see that it returns 0 results; only then do you know you are done. That way you basically "waste" one request to the GitHub API (because you have no ETag for that page, so it can't be a conditional request).
Same with approach 2): you basically get an empty response with status 304 for every page... So again, to know you are done, you need to make at least one additional request that does return 0 results.
So the question is: given that the GitHub API does not send back a Link response header when a conditional request (using an ETag) results in status 304, how can we know when to stop paging? Is this a bug in the GitHub API implementation, or am I missing something?
We don't know the maximum page number, so to know when to stop we have to execute one more "wasted" request and check that we get 0 results back!
I also can't find a way to query GitHub for the total count of starred repositories (so I could calculate in advance how many pages to iterate), and the responses don't include something like "X-Total-Count" that would let me work out the page count with simple math.
Any ideas on how to avoid that one extra ('end') request and still use conditional requests?
If you do one request per day, it's OK to accept such waste, but what if you do such a request once per minute? You will quickly use up all your API rate limits!
UPDATE
Well, after a few more tests, I now see the following "rule" (I can't, however, find it anywhere in the docs, so I'm not sure if it's a rule or just an assumption): if the user stars something new, the result for EVERY requested page has a different ETag value than before and no longer returns status 304! That means it's enough to request just the first page and check its status: if it's 304 (Not Modified), we do NOT need to check the next pages, i.e. we are DONE, as nothing has changed on any page. Is this a correct approach or just a coincidence?
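If that observation holds (and the answer below confirms that a conditional request tells you whether page 1 has changed), the polling loop reduces to something like this sketch, reusing the helpers from the earlier sketches:

```python
FIRST_PAGE = "https://api.github.com/users/evereq/starred"

def poll_starred():
    """Conditionally check page 1 only; a 304 means nothing changed anywhere."""
    first_page = conditional_get(FIRST_PAGE)
    if first_page is None:
        return None  # 304: no new stars since the last run
    # Page 1 changed: re-walk the pages (or stop at the last already-known
    # item, as the answer below suggests) to pick up what is new.
    return fetch_all_starred("evereq")
```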
We indeed return pagination relations in the Link response header when the content has changed. Since we don't support a since parameter for that call, you'll need to sort by most recent results, maintain a client-side cursor for the last known ID or timestamp (based on your sort criteria), and stop paging when it shows up in your paginated results. Conditional requests will just let you know if page 1 has changed.
We haven't settled on a way to return counts on our listing methods, but a really low-tech solution is to set the page size to 1, grab the rel=last Link relation and check its page parameter value.
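That trick could look like this in Python with requests (a sketch; per_page=1 and the rel="last" link are the only moving parts):

```python
import requests
from urllib.parse import urlparse, parse_qs

def count_starred(user):
    """With per_page=1, the page number in the rel="last" link equals the total count."""
    resp = requests.get(f"https://api.github.com/users/{user}/starred",
                        params={"per_page": 1})
    resp.raise_for_status()
    last = resp.links.get("last", {}).get("url")
    if last is None:
        return len(resp.json())  # 0 or 1 items: no pagination links at all
    return int(parse_qs(urlparse(last).query)["page"][0])
```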
Hope that helps.
I wrote a little script using Python and Tweepy to save the tweets for a list of users and also to get some basic properties of those accounts.
Somehow the number of tweets stated in the user profile under statuses_count (for an example of the JSON description of an account: https://api.twitter.com/1/users/show.json?screen_name=TwitterAPI&include_entities=true) does not match the number of tweets I get when iterating through that same user's timeline.
I am aware that Twitter limits the number of tweets per user available through the API to 3200 and does not even guarantee that number, but this behavior occurs even with users who have well under 3200 tweets.
My question is whether this difference is common and why it happens.
Is this just an issue with the Twitter API? Is it caused by deleted tweets (maybe they still count toward statuses_count but can no longer be fetched)?
Thanks!
Thomas
I haven't messed with the Twitter API in several months, but I remember back when I was working with it I found inconsistencies due to retweets not showing up when iterating tweets, while still being counted in the number of tweets. This seems to corroborate that, but it's several months old and things may have changed since then.
Make sure include_rts is set to true, t, or 1 (in addition to specifying the same for include_entities, which you have done). When these aren't included and the defaults apply (as with user lists, for example), you can get fewer tweets than the count you specified.
The Twitter API documentation isn't clear on what the defaults are so it's safer to explicitly specify these optional parameters. And since you're specifically working with the user timeline you might also want exclude_replies turned off.
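A sketch of what that looks like with Tweepy against the v1.1-style API; the credentials are placeholders, and the keyword arguments are passed straight through to the user_timeline endpoint:

```python
import tweepy

# Placeholder credentials.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

# Explicitly ask for retweets and replies so the number of tweets we iterate
# lines up more closely with statuses_count.
tweets = [
    status
    for status in tweepy.Cursor(
        api.user_timeline,
        screen_name="TwitterAPI",
        count=200,
        include_rts=True,
        exclude_replies=False,
    ).items()
]
print(len(tweets))
```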
I am writing a small assistant app to read (well, filter/rank) /r/programming/ for me, because it has so many damn posts, and because a certain area of my coding skills was getting rusty and this sounded like good exercise.
I am getting items from the "new" page of the subreddit using the JSON API; however, it only returns 25 items per request (which is the page size), so to retrieve items for, say, the last week, I need to make dozens of requests. As the mandated request interval is 2s, it is painful.
I wonder if there's some way to retrieve more items? Query string parameters for standard HTML GETs also work for JSON GETs, but I cannot find one for page size.
EDIT: for posterity, the parameter name is "limit", although that too is capped at 100
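For completeness, a sketch of the resulting fetch loop in Python with requests: limit is capped at 100, and the after fullname from each response is used to page backwards through /new.

```python
import time
import requests

HEADERS = {"User-Agent": "subreddit-reader/0.1"}  # Reddit asks for a descriptive User-Agent

def fetch_new_posts(subreddit, pages=5):
    """Fetch up to pages * 100 of the newest posts, 100 per request (the cap)."""
    posts, after = [], None
    for _ in range(pages):
        params = {"limit": 100}
        if after:
            params["after"] = after
        resp = requests.get(f"https://www.reddit.com/r/{subreddit}/new.json",
                            headers=HEADERS, params=params)
        resp.raise_for_status()
        data = resp.json()["data"]
        posts.extend(child["data"] for child in data["children"])
        after = data["after"]
        if not after:  # no more pages
            break
        time.sleep(2)  # respect the mandated request interval
    return posts

posts = fetch_new_posts("programming")
```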