How can I get metrics from the Bitly API with a custom date range?

I have been trying to use the Bitly API.
This is the documentation I have been following.
I am trying to make a GET request to /v4/bitlinks/{bitlink}/clicks, as shown in the documentation above.
import requests

headers = {'Authorization': 'Bearer <ACCESS_TOKEN>'}  # OAuth token

params = {
    'unit': 'day',
    'units': -1
}
# domain is e.g. 'bit.ly'
response = requests.get(f'https://api-ssl.bitly.com/v4/bitlinks/{domain}/2MIHzhO/clicks',
                        headers=headers, params=params)
This gives me 30 results, i.e. the last month (30 day-wise entries).
However, I was expecting more entries beyond that.
If I use units=40 instead, I get the following error with status code 402:
{'message': 'UPGRADE_REQUIRED',
'resource': 'bitlinks',
'description': 'Metrics request exceeds maximum queryable date range.'}
Is there a limit to the date range I can query? It is not explicitly stated in the API documentation.

Yes, your guess is right. There is a limit to the date range you can query on the Bitly free plan: the link history limit is 30 days. That is why you get this error message when trying to query metrics for 40 days.
You can find information about the different Bitly plans here.
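As a minimal sketch (the token is a placeholder, the bitlink assumes the default bit.ly domain, and the link_clicks field layout follows the v4 docs), you can stay inside the 30-day window and handle the 402 explicitly:
import requests

# Sketch: request the full free-plan window and catch UPGRADE_REQUIRED.
headers = {'Authorization': 'Bearer <YOUR_ACCESS_TOKEN>'}
params = {'unit': 'day', 'units': 30}  # 30 days is the free-plan maximum

response = requests.get(
    'https://api-ssl.bitly.com/v4/bitlinks/bit.ly/2MIHzhO/clicks',
    headers=headers, params=params)

if response.status_code == 402:
    # The requested range exceeds what the current plan allows
    print(response.json()['description'])
else:
    # One entry per day: {'date': ..., 'clicks': ...}
    for entry in response.json().get('link_clicks', []):
        print(entry['date'], entry['clicks'])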

Related

How do I write a query using the VersionOne API to return all the hours (actuals) recorded under an Epic?

I need to write a query using the VersionOne API to return all the time (effort) recorded against tasks under a specific Epic. My goal is a one-line statement I can enter into the address bar of my browser.
I've tried the following using the rest-1.v1 query:
http://<>/VersionOne/rest-1.v1/Data/Epic?sel=Epic.ID.Number,SubsAndDown:PrimaryWorkitem[AssetState=%27Closed%27].Actuals.Value.#Sum&where=Epic.ID.Number=%27E-06593%27
http://<>/VersionOne/rest-1.v1/Data/Story?sel=Story.ID.Number,Story.Name,SuperAndUp.Number,SuperAndUp.Actuals.#Sum&where=Story.SuperAndUp.ID.Number=%27E-06593%27
Below is the output from the first query above (similar results from the second query):
<Assets total="1" pageSize="2147483647" pageStart="0">
  <Asset href="/VersionOne/rest-1.v1/Data/Epic/1481442" id="Epic:1481442">
    <Attribute name="SubsAndDown:PrimaryWorkitem[AssetState='Closed'].Actuals.Value.#Sum"/>
  </Asset>
</Assets>
The actual result was no hours returned. I expected ~4,320 hours (the total under Epic E-06593) to come back from the ...#Sum aggregation.
On your first query:
http://<>/VersionOne/rest-1.v1/Data/Epic?sel=Epic.ID.Number,SubsAndDown:PrimaryWorkitem[AssetState=%27Closed%27].Actuals.Value.#Sum&where=Epic.ID.Number=%27E-06593%27
If you change the filter to AssetState!='Closed', you will get results. Beware: there could be another AssetState that might skew your total hours, so you might want to filter down to AssetState='64' ("Active").
See here: https://community.versionone.com/VersionOne_Connect/Developer_Library/Getting_Started/Platform_Concepts/Asset_State
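If it helps, here is that adjusted query issued from Python with requests (a sketch; the host and credentials are placeholders, and '64' is the Active state value from the Asset State page linked above):
import requests

# Sketch: host and credentials are placeholders.
base = 'http://YOUR_HOST/VersionOne/rest-1.v1/Data/Epic'
params = {
    'sel': "Epic.ID.Number,SubsAndDown:PrimaryWorkitem[AssetState='64'].Actuals.Value.#Sum",
    'where': "Epic.ID.Number='E-06593'",
}
response = requests.get(base, params=params, auth=('username', 'password'))
print(response.text)  # XML containing the summed actuals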

Podio API filtering by date range

I'm trying to filter tasks by date range and I'm getting errors whatever I try. This is what my request looks like:
http://api.podio.com/task?completed=true&created_on%5Bfrom%5D=2016-06-23&created_on%5Bto%5D=2016-06-28&limit=100&offset=0&sort_by=rank&sort_desc=false&space=4671314
Here I'm trying to filter by created_on, supplying {from: "2016-06-23", to: "2016-06-28"}, but it always returns the same error: invalid filter. I'm trying to filter tasks created in the last 5 days.
The tasks API reference can be found in Podio's API docs.
What am I doing wrong?
Date ranges are separated by -.
To display all my tasks created between 1 Jan 2014 and 1 Jan 2016:
/task?created_on=2014-01-01-2016-01-01&responsible=0
To get Podio tasks filtered by date range, use a request like the one below:
/task/?created_on=2017-04-25-2017-05-01&offset=0&sort_by=rank&sort_desc=false&space=xxxxxxx
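Put together as a full request in Python (a sketch; the OAuth token is a placeholder, and created_on packs the range in the FROM-TO form described above):
import requests

# Sketch: Podio expects 'OAuth2 <token>' in the Authorization header.
headers = {'Authorization': 'OAuth2 <YOUR_ACCESS_TOKEN>'}
params = {
    'created_on': '2016-06-23-2016-06-28',  # from and to, joined by '-'
    'limit': 100,
    'offset': 0,
    'sort_by': 'rank',
    'sort_desc': 'false',
    'space': 4671314,
}
response = requests.get('https://api.podio.com/task',
                        headers=headers, params=params)
print(response.json())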

Gmail API: resultSizeEstimate gives odd numbers

https://developers.google.com/gmail/api/v1/reference/users/messages/list
The Gmail API allows you to retrieve an estimate of the number of messages for a given query (from:send1@gmail.com is:unread). Somehow the number that the API returns seems very different from the one shown in the webmail.
Any idea how to get the actual number?
resultSizeEstimate is only an estimate and isn't guaranteed to be accurate for general queries. It should give more reasonable (still estimated) numbers for queries on specific labels ("label:MYLABEL" or "label:MYLABEL is:unread").
Unfortunately, there currently isn't a method of getting the actual numbers, other than retrieving all of them and looking at the size of the list returned.
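For completeness, a sketch of that brute-force count with the google-api-python-client (service is assumed to be an authorized Gmail client; this can be slow on large mailboxes):
# Sketch: count messages exactly by paging through users.messages.list.
def count_messages(service, query):
    count, page_token = 0, None
    while True:
        resp = service.users().messages().list(
            userId='me', q=query, pageToken=page_token,
            maxResults=500).execute()
        count += len(resp.get('messages', []))
        page_token = resp.get('nextPageToken')
        if page_token is None:
            return count

print(count_messages(service, 'from:send1@gmail.com is:unread'))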
Partial answer - I was hitting the 103 limit as well.
I've noticed that by playing with the maxResults parameter of the Google API I get different, yet still invalid, results.
With no maxResults specified (default):
request = service.users().messages().list(userId=user_id, labelIds=formatted_labels)
--> resultSizeEstimate = 103
With maxResults=100:
request = service.users().messages().list(userId=user_id, labelIds=formatted_labels, maxResults=100)
--> resultSizeEstimate = 103
With maxResults=1000:
request = service.users().messages().list(userId=user_id, labelIds=formatted_labels, maxResults=1000)
--> resultSizeEstimate = 511
With maxResults=50:
request = service.users().messages().list(userId=user_id, labelIds=formatted_labels, maxResults=50)
--> resultSizeEstimate = 52
This is not a solution for every case, but it may help: you can get the exact number of emails under a given label using Users.labels:get
https://developers.google.com/gmail/api/v1/reference/users/labels/get
The messagesTotal value gives you that number.
Unfortunately, as others have noted, listing messages with Users.messages:list or doing a search does not return accurate counts.
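For illustration, a small sketch with the Python client (service is an authorized Gmail client and the label ID is a placeholder; list your labels first if you only know the name):
# Sketch: the label ID below is a placeholder.
label = service.users().labels().get(userId='me', id='Label_123').execute()
print(label['messagesTotal'])   # exact number of messages under the label
print(label['messagesUnread'])  # exact number of unread messages under it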

Get ALL tweets, not just recent ones via twitter API (Using twitter4j - Java)

I've built an app using twitter4j which pulls in a bunch of tweets when I enter a keyword, takes the geolocation out of the tweet (or falls back to the profile location), then maps them using ammaps. The problem is that I'm only getting a small portion of tweets. Is there some kind of limit here? I've got a DB collecting the tweet data, so soon enough it will have a decent amount, but I'm curious as to why I'm only getting tweets from the last 12 hours or so.
For example, if I search by my username I only get one tweet, the one I sent today.
Thanks for any info!
EDIT: I understand Twitter doesn't allow public access to the firehose; my question is more about why I am limited to only recent tweets.
You need to keep redoing the query, resetting the maxId every time, until you get nothing back. You can also use setSince and setUntil.
An example:
Query query = new Query();
query.setCount(DEFAULT_QUERY_COUNT);
query.setLang("en");

// set the bounding dates (sdf is a SimpleDateFormat such as "yyyy-MM-dd")
query.setSince(sdf.format(startDate));
query.setUntil(sdf.format(endDate));

QueryResult result = searchWithRetry(twitter, query); // searchWithRetry is my function that deals with rate limits
while (result.getTweets().size() != 0) {
    List<Status> tweets = result.getTweets();
    System.out.print("# Tweets:\t" + tweets.size());

    // track the smallest tweet ID on this page...
    Long minId = Long.MAX_VALUE;
    for (Status tweet : tweets) {
        // do stuff here
        if (tweet.getId() < minId)
            minId = tweet.getId();
    }

    // ...so the next page only returns tweets older than the ones already seen
    query.setMaxId(minId - 1);
    result = searchWithRetry(twitter, query);
}
Really it depends on which API you are using, the Streaming API or the Search API. In the Search API there is an optional parameter, result_type, which can take the following values:
* mixed: include both popular and real-time results in the response.
* recent: return only the most recent results in the response.
* popular: return only the most popular results in the response.
The default is mixed.
As far as I understand you are getting recent results, which is why you only see the most recent set of tweets. Another issue is the low volume of tweets that carry geolocation information: because very few users attach geolocation to their tweets or profile, you will get very few geotagged tweets.
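To make that concrete, here is a sketch of the same parameter sent to the v1.1 search endpoint directly with Python requests (the bearer token and username are placeholders); in twitter4j the equivalent knob is Query.setResultType:
import requests

# Sketch: raw v1.1 search call.
# result_type may be 'mixed' (default), 'recent', or 'popular'.
headers = {'Authorization': 'Bearer <YOUR_BEARER_TOKEN>'}
params = {'q': 'from:your_username', 'result_type': 'mixed', 'count': 100}
response = requests.get('https://api.twitter.com/1.1/search/tweets.json',
                        headers=headers, params=params)
print(len(response.json()['statuses']))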

Foursquare Venue API & Number of Results, in a more efficient way?

I'd like to ask if there is a more efficient way to get more than 50 results besides these options:
How do I get more locations?
Foursquare Venue API & Number of Results
and this one, which is for the old API: Foursquare API nearByVenue service issue
I'm using the current Foursquare API for venue search: https://developer.foursquare.com/docs/venues/search .
What I'd like is something like an offset option in order to get more results, but it seems there is no such option.
Is there an alternative solution?
Thank you in advance.
You should use venues/explore with offset and limit as parameters.
venues/explore gives you totalResults, and you can use this value to calculate the number of pages you need to paginate.
For example, assume totalResults is 90 (pay attention to the offset and limit parameter values).
In the first request:
https://api.foursquare.com/v2/venues/explore?client_id=client_id&client_secret=client_secret&v=20150825&near=city_name&categoryId=category_id&intent=browse&offset=0&limit=30
In the second request:
https://api.foursquare.com/v2/venues/explore?client_id=client_id&client_secret=client_secret&v=20150825&near=city_name&categoryId=category_id&intent=browse&offset=30&limit=30
In the third request:
https://api.foursquare.com/v2/venues/explore?client_id=client_id&client_secret=client_secret&v=20150825&near=city_name&categoryId=category_id&intent=browse&offset=60&limit=30
For 90 results, these three requests fetch all the records. A generalized loop is sketched below.
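The same pagination as a loop (a sketch; credentials and the category ID are placeholders, and the page size stays at 30):
import requests

# Sketch: page through venues/explore until totalResults is exhausted.
url = 'https://api.foursquare.com/v2/venues/explore'
params = {'client_id': '<CLIENT_ID>', 'client_secret': '<CLIENT_SECRET>',
          'v': '20150825', 'near': 'city_name', 'categoryId': '<CATEGORY_ID>',
          'intent': 'browse', 'limit': 30, 'offset': 0}

items = []
while True:
    resp = requests.get(url, params=params).json()['response']
    items.extend(resp['groups'][0]['items'])
    params['offset'] += params['limit']
    if params['offset'] >= resp['totalResults']:
        break
print(len(items), 'venues fetched')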
There is actually another option not mentioned here (not pagination, though): using the (experimental?) categoryId filter.
You can search a single point (ll) several times with different category IDs, giving you many results (with some duplicates, since venues can have more than one category).
So you can search for 'Food' venues and 'Nightlife' venues at the same place, getting 100 results instead of 50. As said, it is 100 results, but not 100 unique results; there could be duplicates. I think that is more efficient than trying to play around with the browse radius.
It's not pagination, but it will give a lot more results than a normal search, usually enough even in urban areas; a sketch follows this answer.
That said, extracting more than 50 unique results for a single point in one request is not possible, but it would be nice :)
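A sketch of that category-split search, deduplicating on venue id (credentials, the coordinates, and the category IDs are placeholders; look real IDs up in Foursquare's venue category tree):
import requests

# Sketch: same point, one search per category; duplicates collapse on venue id.
url = 'https://api.foursquare.com/v2/venues/search'
base = {'client_id': '<CLIENT_ID>', 'client_secret': '<CLIENT_SECRET>',
        'v': '20150825', 'll': '40.7,-74.0', 'limit': 50}

venues = {}
for category_id in ['<FOOD_CATEGORY_ID>', '<NIGHTLIFE_CATEGORY_ID>']:
    resp = requests.get(url, params={**base, 'categoryId': category_id}).json()
    for venue in resp['response']['venues']:
        venues[venue['id']] = venue
print(len(venues), 'unique venues')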
Afraid not. Currently there is no pagination; in order to find more venues you need to move your search area around, as in the answers you highlighted. I agree, pagination would be handy though!
For the explore endpoint, this worked for me: if the maximum number of results returned per call is, for instance, 100, just use offset=100 in the next call, which gives you the next 100 results starting from 100 (the offset). Iterate (e.g. using a while loop), increasing the offset by 100 each time, until you reach the total number of results (which the API returns as totalResults).
My first Stack Overflow post; I tried to answer as clearly as possible.
import requests
import pandas as pd

# Assumes CLIENT_ID, CLIENT_SECRET, VERSION and LIMIT are defined globally
def getNearbyVenues(neighborhoods, latitudes, longitudes, radius=500, ven_num=300):
    venues_list = []
    for name, lat, lng in zip(neighborhoods, latitudes, longitudes):
        i = 0
        while i < ven_num + 50:
            url = ('https://api.foursquare.com/v2/venues/explore'
                   '?&client_id={}&client_secret={}&v={}&ll={},{}'
                   '&radius={}&offset={}&limit={}').format(
                CLIENT_ID,
                CLIENT_SECRET,
                VERSION,
                lat,
                lng,
                radius,
                i,
                LIMIT)
            # make the GET request
            results = requests.get(url).json()['response']['groups'][0]['items']
            # keep only relevant information for each nearby venue
            venues_list.append([(
                name,
                lat,
                lng,
                v['venue']['name'],
                v['venue']['location']['lat'],
                v['venue']['location']['lng'],
                v['venue']['categories'][0]['name']) for v in results])
            i = i + 50
    nearby_venues = pd.DataFrame([item for venue_list in venues_list for item in venue_list])
    nearby_venues.columns = ['Neighborhood',
                             'Neighborhood Latitude',
                             'Neighborhood Longitude',
                             'Venue',
                             'Venue Latitude',
                             'Venue Longitude',
                             'Venue Category']
    print('Ok')
    return nearby_venues
The code above has worked perfectly for me; the ven_num variable is the desired number of venues to fetch for a given neighborhood.