Need to extend the URL length on IIS Express; getting a 400 Bad Request - asp.net-core

I am trying to make a GET request against an API, and the URL contains a lot of GUIDs.
How can I raise (or remove) the URL length limit in IIS Express so this query works?
Query example:
https://localhost:44329/api/user_collections/(7e9046ff-93ba-4f4f-a185-0330967e33b4,e88605f2-772f-493a-b9ee-1b561f45af57,e8796bef-cf53-47ca-85e7-1c683775c40f,2f795791-dfd7-469d-881b-217ad3aeb4f0,2f795791-dfd7-469d-881b-217ad3aeb4f0,e36c1106-e38d-40aa-816c-318e06bc9443,e36c1106-e38d-40aa-816c-318e06bc9443,dd07a986-a63a-4a8c-8f77-3944955ab5d4,46299c82-cbf8-4256-996b-4ac6ef975f09,46299c82-cbf8-4256-996b-4ac6ef975f09)

Turns out there’s a registry setting for indicating the maximum length of any individual path segment called “UrlMaxSegmentLength.” The default value is 260 characters. When our route value was longer than that, we’d get the 400 error.
The solution is to pass the value on the querystring rather than as a route value. Something to consider if you have route values that are going to be long. Of course, this isn’t a problem with routing, just that you can get into the situation pretty easily when you get into the habit of pushing parameters into the route. It might be nice in future versions of routing to have some sort of max length check when generating full route URLs so you can’t generate a path segment greater than, say, 256 characters without getting an exception.
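As a sketch of the querystring approach, here is how the GUID list from the question could be moved out of the path (the "ids" parameter name is a hypothetical choice for illustration, not part of the original API):

```python
# Sketch: move the GUID list out of the path and into the query string,
# so no single path segment can exceed the 260-character
# UrlMaxSegmentLength limit. The "ids" parameter name is a hypothetical
# choice for illustration.
from urllib.parse import urlencode

def build_collections_url(base, guids):
    # Comma-join the GUIDs into a single query-string value; the path
    # itself stays short regardless of how many GUIDs are passed.
    query = urlencode({"ids": ",".join(guids)})
    return f"{base}/api/user_collections?{query}"

url = build_collections_url(
    "https://localhost:44329",
    ["7e9046ff-93ba-4f4f-a185-0330967e33b4",
     "e88605f2-772f-493a-b9ee-1b561f45af57"],
)
```

On the server side the action would then read the list from the query string (for example a [FromQuery] parameter in ASP.NET Core) instead of a route value.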

Related

Solr: How to store a limited amount of text instead of the full body

As per our business requirement, I need to index the full story body (consider a news story, for example), but in the Solr query result I need to return only a preview text (say, the first 400 characters) to bind to the target news listing page.
As far as I know, the schema file offers two options for any field: stored=true/false. The only way I can see right now is to set it to true, take the full story body in the result, and then excerpt the preview text manually, but this seems impractical because (1) storing the full body will occupy GBs of disk space and (2) the JSON response becomes very heavy (the query result can return 40K-50K stories).
I also know about limiting the number of records, but for various reasons we need the complete result at once.
Any help achieving this requirement efficiently?
In order to display just 400 characters in the news overview, you can simply use the Solr highlighting feature and specify the number of snippets and their size. For instance, the standard highlighter has these parameters:
hl.snippets: Specifies the maximum number of highlighted snippets to generate per field. Any number of snippets from zero to this value may be generated. This parameter accepts per-field overrides.
hl.fragsize: Specifies the size, in characters, of fragments to consider for highlighting. 0 indicates that no fragmenting should be considered and the whole field value should be used. This parameter accepts per-field overrides.
If you want to index everything but store only part of the text, you can follow the solution advised here in the Solr community.
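A minimal sketch of such a highlighting query, built with Python's urllib (the host, core name "news", and field names "body", "id", "title" are assumptions for illustration):

```python
# Build a Solr select URL that returns only id/title plus a ~400-char
# highlighted snippet of the body, instead of the full stored body.
# Host, core ("news"), and field names ("body", "id", "title") are
# assumptions for illustration.
from urllib.parse import urlencode

def preview_query(host, core, term):
    params = {
        "q": f"body:{term}",   # search within the body field
        "fl": "id,title",      # do not return the stored body itself
        "hl": "true",          # enable highlighting
        "hl.fl": "body",       # generate snippets from the body field
        "hl.snippets": 1,      # one snippet per document
        "hl.fragsize": 400,    # ~400 characters per snippet
        "wt": "json",
    }
    return f"{host}/solr/{core}/select?{urlencode(params)}"

url = preview_query("http://localhost:8983", "news", "election")
```

The 400-character previews then come back in the separate highlighting section of the response, so the documents themselves stay small.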

jsFiddle API to get row count of user's fiddles

So, I had a nice thing going on jsFiddle where I listed all my fiddles on one page:
jsfiddle.net/show
However, they have been changing things slowly this year, and I've already had to make some changes to keep it running. The newest change is rather annoying. Of course, I like to see ALL my fiddles at once; it makes it easier to just hit Ctrl+F and find whatever I might be looking for, but they've made that hard to do now. I used to be able to just set the limit to 99999 and see everything, but now it appears I can't go past how many I actually have (186 atm).
I tried a start/limit solution, but when it got to the last 10|50 (I tried start={x}&limit=10 and start={x}&limit=50) it would die, namely because the last pull had to be an exact count. For example, if I have 186 and use the by-10s approach, it dies at start=180&limit=10.
I've searched the API docs but can't seem to find a row count or anything of that manner. Does anyone know of a good, feasible solution that won't have me overloading their servers doing a constant single-row check?
I'm having the same problem as you are. Then I checked the docs (Displaying user’s fiddles - Result) and found out that if you include the callback=Api parameter, an additional overallResultSetCount field is included in the JSON response. I checked your fiddles, and currently you have a total of 229 public fiddles.
The solution I can think of will force you to make only two requests. The first request's parameters don't matter as long as you have callback=Api. Then you send the second request, in which your limit will be the overallResultSetCount value.
Edit:
It's not in the documentation; however, I think the result set is limited to 200 entries (hence your start/limit range of 0-199). I tried to query beyond the 200 range but I get an Error 500. I couldn't find another user whose fiddle count is more than 200 (most of the usernames I tested have fewer than 100 fiddles, like zalun, oskar, and rpflorence).
Based on this new observation, you can update your script like this:
1. I have tested that if the total fiddle count is less than 200, adding the start=0&limit=199 parameter will return all the fiddles, so you can add that parameter on your initial call.
2. Check if your total result set is more than 200. If yes, update your parameters to reflect the range for the remaining result set (in this case, start=199&limit=229) and add the new result set to your old result set. Otherwise, show/print the result set you got from your first query.
3. Repeat steps 1 and 2 if your total count reaches 400, 600, etc. (any multiple of 200).
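The paging plan above can be sketched as a small helper that yields the start/limit pairs to request (the 200-entry page size is the cap observed in the answer; the HTTP call itself is left out):

```python
# Yield the (start, limit) pairs needed to cover `total` results in
# windows of at most `page_size` entries, matching the observed
# 200-entry cap. `total` would come from the overallResultSetCount
# field of the initial callback=Api request.

def page_ranges(total, page_size=200):
    start = 0
    while start < total:
        limit = min(page_size, total - start)  # last window may be short
        yield (start, limit)
        start += limit
```

For the 229 fiddles mentioned above, this yields (0, 200) followed by (200, 29), so only the final window needs the exact remainder.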

Yammer API - Paging

I am trying to gather a range of messages through the REST API, and am aware that you can only retrieve 20 results at a time. I have tried incrementing a page variable, but this has no effect; I just get the same results each time no matter the page number (https://www.yammer.com/api/v1/messages.json?page=6). I have proceeded to use the newer_than and older_than parameters to page through the results, and it works to some extent, but it appears to be excluding records. I am using the approach below:
Since just setting a newer_than only results in the 20 most recent records as long as they are newer than the id that is sent in the newer_than parameter, I am also setting a dynamic older_than parameter.
Send request with only a newer than parameter. This returns the 20 most recent records. (eg. ww.yammer.com/api/v1/messages.json?newer_than=235560157)
Extract the ID of the 20th record in the JSON and use it to populate the older_than parameter. The result is 20 different records. (eg. ww.yammer.com/api/v1/messages.json?newer_than=235560157&older_than=405598096)
Repeat step 2 until no results are returned since the newer_than and older_than parameters will eventually overlap.
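The steps above can be sketched as a loop; fetch_page here is a stand-in for the real HTTP call to /api/v1/messages.json (an assumption for illustration), so only the windowing logic is modeled:

```python
# A minimal sketch of the newer_than/older_than windowing described in
# the steps above. fetch_page stands in for the real HTTP call to
# /api/v1/messages.json; only the paging logic is modeled here.

def collect_messages(fetch_page, newer_than):
    """Collect all messages newer than `newer_than`.

    fetch_page(newer_than, older_than) must return at most 20 messages,
    newest first, like the Yammer messages endpoint."""
    messages = []
    older_than = None          # no upper bound on the first request
    while True:
        page = fetch_page(newer_than, older_than)
        if not page:
            break              # the two bounds have met: we are done
        messages.extend(page)
        older_than = page[-1]["id"]   # oldest id of this page
    return messages
```

Note that, per the answers below, this only walks the top-level messages; comments beyond the default two per message have to be fetched separately.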
The problem is that the set of records that is returned with this method is less than the number of records that is returned for messages from the data export API. I am working under the assumption that newer message IDs are always generated with a value greater than any older messages.
Could I possibly be misunderstanding how paging through results is supposed to be implemented with the REST API?
Any help would be much appreciated!
Thanks in advance!
First of all, the page parameter works only for the search API.
Secondly, the way you are trying to fetch messages will return either no comments or only the top 2 comments on any message, depending on the "extended" parameter. By default it returns 2 comments on every message. To get all the comments on a message, you have to fetch each message individually.
That must be causing the difference in the number of messages in the two methods mentioned.
I agree with Farhann - the REST API endpoint returns only the top two comments for any message by default. To get all the comments for a post, you have to make a separate request.
With the Data Export API, all the comments along with the messages (public and private) are exported, which increases the message count. The REST API call, by contrast, returns only the 2 most recent comments on any message by default.
The data export includes private messages. Private messages will not be returned by that API call.
Check if the messages you are not seeing are private messages.

CouchDB API query with ?limit=0 returns one row - bug or feature?

I use CouchDB 1.5.0 and noticed a strange thing:
When I query some API action, for example:
curl -X GET "http://localhost:5984/mydb/_changes?limit=1"
I get the same result with limit=1, with limit=0, and with limit=-55. In all cases it is one row from the start of the list.
By contrast, PostgreSQL returns:
zero rows when LIMIT 0
the message ERROR: LIMIT must not be negative when LIMIT is -55
My question is mainly concerned with the API design. I would like to know your opinions.
Is it a flaw, or is it good/acceptable practice?
This is how the _changes API is designed. If you do not specify the type of feed (i.e. longpoll, continuous, etc.), the default is to return a list of all the changes in a single results array.
If you want a row-by-row result of the changes in the database, specify the type of feed in the URL like so:
curl -X GET "http://localhost:5984/mydb/_changes?feed=continuous"
Another point to note is that in the _changes API, using 0 for the limit parameter has the same effect as using 1.
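As a small client-side sketch (host and database name are the ones from the question; the normalization mirrors the limit=0 behaviour noted above):

```python
# Build a _changes URL with an explicit feed type. Since CouchDB treats
# limit=0 like limit=1, normalize the value on the client side so the
# request says what it means.
from urllib.parse import urlencode

def changes_url(host, db, feed="normal", limit=None):
    params = {"feed": feed}
    if limit is not None:
        params["limit"] = max(int(limit), 1)  # limit=0 would act as 1 anyway
    return f"{host}/{db}/_changes?{urlencode(params)}"

url = changes_url("http://localhost:5984", "mydb", feed="continuous")
```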

Flickr Geo queries not returning any data

I cannot get the Flickr API to return any data for lat/lon queries.
http://api.flickr.com/services/rest/?method=flickr.photos.search&media=photo&api_key=KEY_HERE&has_geo=1&extras=geo&bbox=0,0,180,90
This should return something, anything. It doesn't work if I use lat/lng either. I can get some photos returned if I look up a place_id first and then use that in the query, except then all the photos returned are from anywhere, not from the place_id.
Eg,
http://api.flickr.com/services/rest/?method=flickr.photos.search&media=photo&api_key=KEY_HERE&placeId=8iTLPoGcB5yNDA19yw
I deleted out my key obviously, replace with yours to test.
Any help appreciated, I am going mad over this.
I believe the Flickr API won't return any results if you don't put additional search terms in your query. If I recall correctly, the documentation treats this as an unbounded search. Here is a quote from the documentation:
Geo queries require some sort of limiting agent in order to prevent the database from crying. This is basically like the check against "parameterless searches" for queries without a geo component.
A tag, for instance, is considered a limiting agent as are user defined min_date_taken and min_date_upload parameters — If no limiting factor is passed we return only photos added in the last 12 hours (though we may extend the limit in the future).
My app uses the same kind of geo searching so what I do is put in an additional search term of the minimum date taken, like so:
http://api.flickr.com/services/rest/?method=flickr.photos.search&media=photo&api_key=KEY_HERE&has_geo=1&extras=geo&bbox=0,0,180,90&min_taken_date=2005-01-01 00:00:00
Oh, and don't forget to sign your request and fill in the api_sig field. My experience is that geo-based searches don't behave consistently unless you attach your api_key and sign your search. For example, I would sometimes get search results and then later, with the same search, get no images when I didn't sign my query.
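A sketch of the classic (pre-OAuth) signing scheme: sort the parameters by name, concatenate your shared secret with each name/value pair, and MD5 the result as api_sig. SECRET_HERE and KEY_HERE are placeholders, and the exact scheme is my recollection of the old auth docs, so verify it against the documentation for your API version:

```python
# Classic Flickr request signing (pre-OAuth), written from memory as an
# assumption to verify: api_sig = md5(secret + name1 + value1 + name2 +
# value2 + ...) with parameter names sorted alphabetically.
# SECRET_HERE / KEY_HERE are placeholders.
import hashlib

def flickr_sign(params, secret):
    # Concatenate the secret with each sorted name/value pair, then MD5.
    payload = secret + "".join(k + str(params[k]) for k in sorted(params))
    return hashlib.md5(payload.encode("utf-8")).hexdigest()

params = {
    "method": "flickr.photos.search",
    "api_key": "KEY_HERE",
    "has_geo": "1",
    "bbox": "0,0,180,90",
    "min_taken_date": "2005-01-01 00:00:00",
}
params["api_sig"] = flickr_sign(params, "SECRET_HERE")
```

The resulting api_sig is then appended to the request URL along with the other parameters.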