I used the GetTwitter processor in Apache NiFi to process live tweets by user ID. Initially it worked fine, but after a few hours no tweets were retrieved from Twitter.
May I know why I am getting this issue?
Note:
I am using 25,000 user IDs to fetch live tweets. Some blogs I referred to say that 5,000 user IDs is the maximum for the GetTwitter processor, so I used 5 GetTwitter processors with the same access credentials to split the 25,000 user IDs into 5 parts.
It sounds like you are encountering a rate-limiting issue. Note that because all five GetTwitter processors share the same access credentials, they all count against a single rate limit. Twitter's documentation on rate limiting can be found at https://developer.twitter.com/en/docs/basics/rate-limiting
I have a problem with the Google Calendar API.
We are using the API with OAuth2 authentication in Python code.
It was working fine for about three years, but as of today it has started to output the following error log.
<HttpError 403 when requesting https://www.googleapis.com/calendar/v3/calendars/**********************/events?alt=json returned "Quota exceeded for quota metric 'Queries' and limit 'Queries per day' of service 'calendar-json.googleapis.com' for consumer 'project_number:*************'.">
However, the above log is not always output, and the error rate is about 30%.
We have set the quota limit to 1,000,000 queries per day, and we make about 3,000 queries per 24 hours.
Even though our daily usage is nowhere near the limit, we are being told that we have used up our quota for the day.
The API methods we are using are as follows:
calendar.v3.Events.Insert
calendar.v3.Events.Get
calendar.v3.Events.Update
calendar.v3.Events.Delete
calendar.v3.CalendarList.List
calendar.v3.Calendars.List
calendar.v3.Acl.List
calendar.v3.Calendars.Insert
calendar.v3.Events.Insert is requested about 2,000 times in 24 hours.
calendar.v3.Calendars.Insert has been requested 2 times in 24 hours.
Has anyone encountered this before?
Thank you.
We have the exact same problem here.
Everything was working fine, and suddenly, starting this morning, these 403 errors (rateLimitExceeded) started showing up at about a 50% rate.
Our API usage is exactly the same as before. Our quota sits at 600 requests/minute/user and 1,000,000 requests per day. All our usage is manual, so I don't see how we could suddenly have exceeded that limit...
It looks like other people have the same problem, it must be a bug in Google APIs.
The bug has been reported to Google already, see here: https://issuetracker.google.com/issues/182497593
I suggest you star this issue and wait for Google's answer.
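In the meantime, because the errors are intermittent, retrying with exponential backoff is a reasonable workaround. Below is a minimal sketch using google-api-python-client; the helper name and retry counts are just illustrative, and it assumes your requests are built the usual way, e.g. service.events().insert(...):

import random
import time

from googleapiclient.errors import HttpError

def execute_with_backoff(request, max_retries=5):
    # Retry transient 403/429 quota errors with exponential backoff.
    for attempt in range(max_retries):
        try:
            return request.execute()
        except HttpError as err:
            if err.resp.status in (403, 429) and attempt < max_retries - 1:
                time.sleep(2 ** attempt + random.random())  # 1s, 2s, 4s, ... plus jitter
            else:
                raise

# usage: event = execute_with_backoff(service.events().insert(calendarId=cal_id, body=body))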
I'm in the process of developing a new tool for a company app. The tool will send a high number of searches to the Amadeus API. Is every search considered a request? A single user search may have to query the API 1,000 times; are all of those counted as requests? Because if the company has a 10,000-request limit per month, 10 users would exhaust it! I need to understand this, please.
Every time you call an API (every time you use a GET/POST verb) you make a "request".
The limitation (quota) is only in the test environment, you don't pay for it but you have a limited number of calls and you only have access to a subset of data.
In production, you don't have any limitation on the total number of queries you can do. You get access to our full set of data (live) but you pay per use (you pay for each request you do).
You do have a limit on the number of requests you can make per second (TPS, transactions per second: 10 in production / 5 in test).
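To stay under the TPS cap, you can throttle on the client side. A minimal sketch, assuming each loop iteration issues one search request (the searches list is hypothetical):

import time

class RateLimiter:
    # Allow at most max_tps calls per second by spacing them out.
    def __init__(self, max_tps):
        self.min_interval = 1.0 / max_tps
        self.last_call = 0.0

    def wait(self):
        elapsed = time.monotonic() - self.last_call
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self.last_call = time.monotonic()

limiter = RateLimiter(max_tps=5)   # 5 TPS in test, 10 in production
searches = ["MAD-LHR", "CDG-JFK"]  # hypothetical search inputs
for search in searches:
    limiter.wait()
    # ...issue the GET/POST call here; each call counts as one billable request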
I have routines that synchronize Class/Roster information between an SIS and Google Classroom. Everything has been running smoothly until very recently (11/1/2016). Now we're seeing the following message in all of our log files for routines that handle Classroom syncs.
Insufficient tokens for quota group and limit 'DefaultGroupUSER-100s' of service 'classroom.googleapis.com', using the limit by ID...
We perform batch requests whenever possible and these errors are showing up in individual batch "part" responses. The fact that these errors suddenly started showing up for ALL of our Classroom routines makes me think that something changed on the Google end of things.
I've been playing around with the throttling on our end, changing both the number of requests we send in each batch (the docs say you can send 1,000 per batch) and the total number of requests/batches we send per 100 seconds (the docs say you can send 50/s/client and also 5/s/user). Interestingly, the quotas shown in the developer console are slightly different, but I assume they are meant to be interpreted in conjunction with one another.
I've throttled things down to the point where we're not even getting close to 5 requests per second and I'm still getting these errors back from the server.
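For reference, here is roughly the shape of our batching (a minimal sketch with google-api-python-client; the credentials and course IDs are stand-ins). The per-part quota errors arrive in the callback:

import google.auth
from googleapiclient.discovery import build

creds, _ = google.auth.default()  # however you normally authorize
service = build("classroom", "v1", credentials=creds)
course_ids = ["123", "456"]  # stand-in course IDs

failed = []  # parts that came back with a quota error

def callback(request_id, response, exception):
    # Each part of the batch gets its own response; the
    # "Insufficient tokens for quota group ..." 403s show up here.
    if exception is not None:
        failed.append(request_id)

batch = service.new_batch_http_request(callback=callback)
for i, course_id in enumerate(course_ids):
    batch.add(service.courses().get(id=course_id), request_id=str(i))
batch.execute()
# failed parts can be retried later after a delay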
Can someone provide some suggestions or solutions? Has anyone experienced this lately?
Let me know if any additional information is needed.
I would like to do the following using the Okta API:
One time, I would like to pull the entire system log.
Going forward, I would like to pull only that day's log information.
The challenge I am facing is that whenever I get the logs, I only get 1,000 records. How do I get the whole day's log, which may be more than 1,000 records? Could somebody help me with a piece of code that shows how to do this?
Thanks
You can use the Events API to retrieve this information. This API supports Pagination so you can retrieve all the events for a particular filter (like all events after a certain point in time).
1000 is the default limit for the Events API because this object can potentially contain a lot of data.
However, you can specify how many records for a specific time range are returned via the Events API using filters. For example, the following GET request would retrieve the first 100 successful login requests since 1 January 2015:
https://{{YOUR_COMPANY}}.okta.com/api/v1/events?limit=100&filter=published gt "2015-01-01T00:00:00.000Z" and action.objectType eq "core.user_auth.login_success"
If there are more than 100 records, you can get the next set by following the rel="next" link returned in the response's Link header. If you wanted to get only messages for today, you could change the date.
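Putting that together, a minimal sketch in Python with the requests library (YOUR_COMPANY and the API token are placeholders):

import requests

url = "https://YOUR_COMPANY.okta.com/api/v1/events"
params = {
    "limit": 100,
    "filter": 'published gt "2015-01-01T00:00:00.000Z" '
              'and action.objectType eq "core.user_auth.login_success"',
}
headers = {"Authorization": "SSWS YOUR_API_TOKEN"}  # Okta API token placeholder

events = []
while url:
    resp = requests.get(url, params=params, headers=headers)
    resp.raise_for_status()
    events.extend(resp.json())
    url = resp.links.get("next", {}).get("url")  # follow the rel="next" Link header
    params = None  # the next link already embeds the query parameters
print(len(events), "events retrieved")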
I want to get all tracks with 0 or 1 plays, and I am looking at the playback_count stat from the http://api.soundcloud.com/tracks/90891876.json?client_id=XXX URL, where playback_count is included in the JSON response. We have almost 1,500 sound snippets. Is it possible to make a script that fetches this data ~1,500 times, or will I get throttled for spamming the API? We will only use these stats a couple of times, to measure how our campaign to increase plays is going. Or is it possible to get this data in just one request?
I saw this question just earned the "Tumbleweed" badge and I felt bad.
If the tracks are all owned by the same user, you can use this endpoint:
http://api.soundcloud.com/users/{id}/tracks
If you just have a list of tracks, you can use this endpoint:
http://api.soundcloud.com/tracks?ids=123,234,765,456,etc
See the "filters" section of the docs here: http://developers.soundcloud.com/docs/api/reference#tracks
But keep in mind that although the HTTP spec does not impose a limit on the length of the query string, the default Apache settings will return an error somewhere around 4,000 characters. That's probably around 400 tracks for this endpoint. Play around with it; SoundCloud may also have its own limit on the number of tracks per query.
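To fetch all ~1,500 tracks without one huge query string, you can batch the IDs. A minimal sketch in Python (CLIENT_ID and the ID list are placeholders):

import requests

CLIENT_ID = "XXX"  # your SoundCloud client_id
track_ids = [90891876, 90891877]  # replace with your ~1,500 track IDs

low_play_tracks = []
for i in range(0, len(track_ids), 50):  # 50 IDs per request, well under the URL limit
    batch = track_ids[i:i + 50]
    resp = requests.get(
        "http://api.soundcloud.com/tracks",
        params={"ids": ",".join(map(str, batch)), "client_id": CLIENT_ID},
    )
    resp.raise_for_status()
    low_play_tracks += [t for t in resp.json() if t["playback_count"] <= 1]
print(len(low_play_tracks), "tracks with 0 or 1 plays")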
Alternatively, you could embed your players on a website (server) and track them with Google Analytics. I made a script for that: http://vitorventurin.com/tracking-soundcloud-with-google-analytics/