Content-Range configuration for Django Rest Pagination - dojo

6.30.15 - How can I make this question better and more helpful to others? Feedback would be appreciated. Thanks!
I need to send a content-range header to a dojo/dgrid request:
I cannot find any examples of how to do this, and I'm not exactly sure where this setting goes (Content-Range: items 0-9/*). I was given a good LinkHeaderPagination example on this question: Django Rest Framework Pagination Settings - Content-Range. But I don't know how to adapt it to produce a Content-Range response. Does anyone know of any good resources or examples?
UPDATE: I am trying to create pagination in dojo/dgrid. I am using a server-side API (Django Rest Framework) to supply data to the dgrid. Django Rest Framework does not automatically send a Content-Range header when it receives a request from Dojo. Dojo sends a Range request when it is configured for pagination, and I don't know how to configure the Django Rest Framework API to send a Content-Range header in response to such a request. Unfortunately, I'm trying to do something very specific, and general settings on either side don't work.

Including Content-Range Header in response:
You just need to create a headers dictionary with Content-Range as the key, and a value describing which items are being returned and how many items exist in total.
For example:
from rest_framework import pagination
from rest_framework.response import Response


class ContentRangeHeaderPagination(pagination.PageNumberPagination):
    """
    A custom pagination class to include a Content-Range header in the
    response.
    """

    def get_paginated_response(self, data):
        """
        Override this method to include a Content-Range header in the response.

        For example, the Content-Range header value received in the response
        for items 11-20 out of a total of 50 would be:

            Content-Range: items 10-19/50
        """
        total_items = self.page.paginator.count  # total number of items in the queryset
        item_starting_index = self.page.start_index() - 1  # within a page, indexing starts at 1
        item_ending_index = self.page.end_index() - 1
        content_range = 'items {0}-{1}/{2}'.format(
            item_starting_index, item_ending_index, total_items)
        headers = {'Content-Range': content_range}
        return Response(data, headers=headers)
Suppose this is the header received:
Content-Range: items 0-9/50
This indicates that the first 10 items are returned out of a total of 50.
Note: You can also use * instead of total_items if calculating the total is expensive:
Content-Range: items 0-9/*
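To actually activate a class like this, it needs to be referenced from the DRF settings (or set as pagination_class on a specific view). A minimal sketch, assuming the class lives in a hypothetical myapp/pagination.py module:

# settings.py -- 'myapp.pagination' is a placeholder for wherever the class lives
REST_FRAMEWORK = {
    'DEFAULT_PAGINATION_CLASS': 'myapp.pagination.ContentRangeHeaderPagination',
    'PAGE_SIZE': 10,
}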

If you are talking about providing Content-Range in the response, I mentioned in an answer to another SO question (which I believe may also have originated from your team?) that there is one alternative to this header: if your response format is an object (not just an array of items), it can specify a total property indicating the total number of items instead.
Looking again at the DRF documentation for a few minutes, it appears it should be possible to customize the format of the response.
Based on the docs and reading the source of LimitOffsetPagination (which you want to be using to work with dstore, as already discussed in a previous question), if I had to take a wild guess, you should be able to do the following server-side:
from collections import OrderedDict

from rest_framework import pagination
from rest_framework.response import Response


class CustomPagination(pagination.LimitOffsetPagination):
    def get_paginated_response(self, data):
        return Response(OrderedDict([
            ('total', self.count),
            ('next', self.get_next_link()),
            ('previous', self.get_previous_link()),
            ('items', data)
        ]))
This purposely assigns the count to total and the data to items to align with the expectations of dstore/Request. (next and previous are entirely unnecessary as far as dstore is concerned, so you could take or leave them, depending on whether you have any use for them elsewhere.)
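If it turns out the grid really does need the Content-Range header rather than a total property, the two answers above can be combined. This is only a sketch based on the attributes LimitOffsetPagination sets during paginate_queryset (self.offset and self.count); I haven't run it against dgrid:

from rest_framework import pagination
from rest_framework.response import Response


class ContentRangeLimitOffsetPagination(pagination.LimitOffsetPagination):
    def get_paginated_response(self, data):
        # self.offset and self.count are set by LimitOffsetPagination
        # when paginate_queryset() runs
        start = self.offset
        end = self.offset + len(data) - 1 if data else self.offset
        content_range = 'items {0}-{1}/{2}'.format(start, end, self.count)
        return Response(data, headers={'Content-Range': content_range})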

Related

Fetch All Pull-Request Comments Via Bitbucket REST API

This is how to retrieve a particular pull request's comments according to Bitbucket's documentation:
While I do have the pull request ID and format a correct URL, I still get a 400 response error. I am able to make a POST request to comment, but I cannot make a GET. After further reading, I noticed that the six parameters listed for this endpoint do not say 'optional'. It looks like these need to be supplied in order to retrieve all the comments.
But what exactly are these parameters? I don't find their descriptions to be helpful in the slightest. Any and all help would be greatly appreciated!
fromHash and toHash are only required if diffType isn't set to EFFECTIVE. state also seems optional to me (it didn't give me an error when I left it out), and anchorState specifies which kind of comments to fetch - you'd probably want ALL there. As far as I understand it, path contains the path of the file to read comments from (e.g. if src/a.py and src/b.py were changed, it specifies which of them to fetch comments for).
However, that's probably not what you want. I'm assuming you want to fetch all comments.
You can do that via /rest/api/1.0/projects/{projectKey}/repos/{repositorySlug}/pull-requests/{pullRequestId}/activities which also includes other activities like reviews, so you'll have to do some filtering.
I won't paste example data from the documentation or the Bitbucket instance I tested this on, since the JSON response is quite long. As I've said, there is an example response on the linked page. I also think you'll figure out how to get to the data you want once it's downloaded, since this is a Q&A forum and not a "program this for me" page :b
As a small quickstart: you can use curl like this
curl -u <your_username>:<your_password> https://<bitbucket-url>/rest/api/1.0/projects/<project-key>/repos/<repo-name>/pull-requests/<pr-id>/activities
which will print the response json.
Python version of that curl snippet using the requests module:
import requests

url = "<your-url>"  # see above on how to assemble your url
r = requests.get(
    url,
    params={},  # you'll need this later
    auth=requests.auth.HTTPBasicAuth("your-username", "your-password")
)
Note that the result is paginated according to the API documentation, so you'll have to do some extra work to build a full list: either set an obnoxiously high limit (dirty) or keep making requests until you've fetched everything. I strongly recommend the latter.
You can control which data you get using the start and limit parameters, which you can either append to the url directly (e.g. https://bla/asdasdasd/activity?start=25) or - more cleanly - add to the params dict like so:
requests.get(
    url,
    params={
        "start": 25,
        "limit": 123
    }
)
Putting it all together:
def get_all_pr_activity(url):
    start = 0
    values = []
    while True:
        r = requests.get(url, params={
            "limit": 10,  # adjust this limit to your liking - 10 is probably too low
            "start": start
        }, auth=requests.auth.HTTPBasicAuth("your-username", "your-password"))
        values.extend(r.json()["values"])
        if r.json()["isLastPage"]:
            return values
        start = r.json()["nextPageStart"]

print([x["id"] for x in get_all_pr_activity("my-bitbucket-url")])
will print a list of activity ids, e.g. [77190, 77188, 77123, 77136] and so on. Of course, you should probably not hardcode your username and password there - it's just meant as an example, not production-ready code.
Finally, to filter by action inside the function, you can replace the return values line with something like
return [activity for activity in values if activity["action"] == "COMMENTED"]

Why am I only getting IDs back in the response? IGDB API

I have already read the documentation and I think I am making the simplest request in the correct way, but it always returns only the IDs instead of all the fields of the games.
Documentation example: Documentation Example
The request header is fine. I know this because I can get the expected response if I pass fields=* as a query string.
this is my request:
You have to provide the fields you want inside the body.
Like this:
fields age_ratings,aggregated_rating,aggregated_rating_count,alternative_names,artworks,bundles,category,checksum,collection,cover,created_at,dlcs,expansions,external_games,first_release_date,follows,franchise,franchises,game_engines,game_modes,genres,hypes,involved_companies,keywords,multiplayer_modes,name,parent_game,platforms,player_perspectives,rating,rating_count,release_dates,screenshots,similar_games,slug,standalone_expansions,status,storyline,summary,tags,themes,total_rating,total_rating_count,updated_at,url,version_parent,version_title,videos,websites;
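For illustration, here is a rough sketch of sending that body with Python's requests module. The v4 endpoint and the Client-ID / Bearer token headers are assumptions based on the current IGDB docs - keep whatever headers your working request already uses; the point is only that the fields string goes in the request body:

import requests

# Assumed endpoint and auth headers - substitute your own values.
url = "https://api.igdb.com/v4/games"
headers = {
    "Client-ID": "your-client-id",
    "Authorization": "Bearer your-access-token",
}
body = "fields name,genres,summary,rating; limit 10;"  # the fields live in the body

r = requests.post(url, headers=headers, data=body)
print(r.json())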

Pagination rules - HTTP response header to make next request

I have a "copy data" activity making a REST call to get some data in JSON format. The data should then be transferred to a SQL database.
The problem is I can only get a certain amount of data for the specified HTTP request. For this I need to implement pagination rules, and I have tried to understand this documentation: https://learn.microsoft.com/en-us/azure/data-factory/connector-rest#pagination-support
The HTTP response returns the absolute URL for the next request in a header field named "Link". As I can see in the documentation, it is supposed to be possible to take the value of "Link" and put it into a pagination rule.
As stated in the documentation:
Next request’s absolute or relative URL = header value in current response headers
It says one of the supported pagination keys is AbsoluteUrl, and that its value should be set like this:
Headers.response_header OR Headers['response_header']
Where the response_header is defined like this in the docs:
"response_header" is user-defined which references one header name in the current HTTP response, the value of which will be used to issue next request.
What I can't seem to understand is how this "response_header" can be set to reference the HTTP response header value of "Link".
You need to replace the 'response_header' placeholder with your header name - in your case, 'Link'.
Or in code editor:
"paginationRules": {
"AbsoluteUrl": "Headers['Link']"
}

What is the proper way to format a GET request to retrieve all media items of the Instagram user "self"?

Looking at https://www.instagram.com/developer/endpoints/users/ I have gathered that calling https://api.instagram.com/v1/users/self/media/recent/?access_token=ACCESS-TOKEN will return "the most recent media published by the owner of the access_token."
In sandbox mode I understand that I will receive a maximum of 20 media items from this call. I also realize that the response includes a pagination object that I can use to retrieve up to 20 more media items (see below):
"pagination": {
"next_max_tag_id": "1387272337517",
"deprecation_warning": "next_max_id and min_id are deprecated for this endpoint; use min_tag_id and max_tag_id instead",
"next_max_id": "1387272337517",
"next_min_id": "1387272345517",
"min_tag_id": "1387272345517",
"next_url": "https://api.instagram.com/v1/tags/cats/media/recent?access_token=xxx&max_tag_id=1387272337517"
}
These are the listed parameters of this Request.
PARAMETERS
ACCESS_TOKEN A valid access token.
MAX_ID Return media earlier than this max_id.
MIN_ID Return media later than this min_id.
COUNT Count of media to return.
My question is: Is there a way to structure my GET request in a way that returns all media items from a single call? I understand that this is not possible in sandbox mode, given a user with more than 20 media items. If possible please provide a detailed explanation of the COUNT parameter.
I imagine a possible GET request would look like https://api.instagram.com/v1/users/self/media/recent/?access_token=ACCESS-TOKEN&count=X
Thank you. <3
When you go to live mode, you can use the count= param to get more than 20 in one API call; however, I have noticed that count=33 is the maximum you can get for this API call - even if you request anything more than 33, it only returns 33.
You still have to use the pagination object in the JSON response to get the next set of media items; the easiest way is to use pagination.next_url to make the API call for the next set of media items.
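As a rough sketch of that loop (assuming the v1 users/self/media/recent endpoint discussed above, with a data list and a pagination.next_url in each response), it might look something like this:

import requests

def get_all_media(access_token):
    url = "https://api.instagram.com/v1/users/self/media/recent/"
    params = {"access_token": access_token, "count": 33}  # 33 seems to be the maximum
    media = []
    while url:
        payload = requests.get(url, params=params).json()
        media.extend(payload.get("data", []))
        # next_url already carries the access token and cursor, so drop params
        url = payload.get("pagination", {}).get("next_url")
        params = None
    return media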

Correct response on a request for multiple objects where one does not exist

What is the correct response on a GET request for multiple objects where one or more of them does not exist? e.g.:
http://domain.net/event-list/?ids=1&ids=5&ids=3
where object with id 5 does not exist. Should I return a list with just objects 1 and 3 or should I return some kind of error? What is the correct response?
Also, I wonder if the behaviour should be any different if the request is POST. For instance:
$.post('domain.net/events/bulk-edit/?ids=1&ids=5&ids=3', { public: true });
Should I just perform the operation for the objects that exist, or not perform the operation at all and return an error?
I know there is some debate about whether non-empty query strings are OK for POST requests. I think they are all right, at least for this particular case where you request a subset of objects to do something with them.
Okay, I gave it some thought and here is what I believe is the right thing to do.
This is a bit of a headache since you're requesting multiple objects at once, which is usually a WebDAV thing, bringing wonders such as the 207/Multi-Status response with it. Let me start off by saying that I think your query string has the wrong format. I think it really should look like this:
?ids[]=1&ids[]=5&ids[]=3
Now about responses to a GET request. I believe the following response codes are the right ones (a rough sketch of the corresponding view logic follows the list):
200 if any object could be found by id
400 on a missing or empty ids query parameter (unless you think no ids should translate into get all objects)
404 if none of the given ids match any object
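Purely as an illustration of that decision table, and sticking with Django REST Framework since that's what the rest of this page uses, the view logic might look roughly like this (Event and EventSerializer are hypothetical placeholders):

from rest_framework import status
from rest_framework.response import Response
from rest_framework.views import APIView

from myapp.models import Event                  # hypothetical model
from myapp.serializers import EventSerializer   # hypothetical serializer


class EventListView(APIView):
    def get(self, request):
        ids = request.query_params.getlist('ids')
        if not ids:
            # 400 on a missing or empty ids parameter
            return Response({'detail': 'ids parameter is required'},
                            status=status.HTTP_400_BAD_REQUEST)
        events = Event.objects.filter(pk__in=ids)
        if not events.exists():
            # 404 if none of the given ids match any object
            return Response(status=status.HTTP_404_NOT_FOUND)
        # 200 if at least one object could be found
        return Response(EventSerializer(events, many=True).data)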
If you want to notify the client that the request couldn't be satisfied in parts, you are free to send a Warning header along (cf. RFC 2616, sec. 14.46). However, if you really want to do it absolutely right™, here's how to deal with requests where not every id is valid:
If all ids could be used to load an object, send the 200/Ok response code
If there are any ids that could not be used to load an object, redirect via 301/Moved Permanently to a new URL sans the offending ids param(s)
As for the POST request: it is my understanding that you'd like to set multiple events as public in one go? If so, I'd really change the approach: send a POST to http://domain.net/events/publish and send the ids in the POST body.
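A small sketch of what that client call could look like (the /events/publish URL and the JSON body shape are just the hypothetical design suggested above, shown with Python's requests module for consistency with the rest of this page):

import requests

# Hypothetical endpoint and payload following the suggestion above:
# the ids travel in the POST body rather than in the query string.
r = requests.post(
    "http://domain.net/events/publish",
    json={"ids": [1, 5, 3], "public": True},
)
print(r.status_code)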