Limit for the flagged items in Circuit REST API - circuit-sdk

I see that there is no pagination in most of the Circuit API.
One example is: GET /conversations/{convId}/messages/flag
Does that mean that I will always get all the messages at once? Are there any plans to add pagination to the APIs?

Some of the endpoints do offer pagination, but in the form of specifying a timestamp and the number of items to retrieve. For example, GET /conversations or GET /conversations/{convId}/items can be paged that way.
For the flagged items, pagination is currently not supported because the backend returns all available flagged items at once. We know this is not optimal at the moment, and we plan to address it in a future version 3 of the API, but it may take some time until that release.
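For the endpoints that do support it, paging works by repeatedly requesting a batch and moving a timestamp cursor. Here is a minimal Python sketch of that pattern; the host and the parameter and field names (numberOfResults, modificationDate, creationTime) are assumptions, so check the Circuit API reference for the exact names and cursor semantics.

```python
# A hedged sketch of timestamp-based paging against the Circuit REST
# API. Parameter/field names are placeholders; verify against the docs.
import requests

BASE = "https://circuitsandbox.net/rest/v2"  # hypothetical host
HEADERS = {"Authorization": "Bearer <access_token>"}  # placeholder token

def page_items(conv_id):
    cursor = None  # timestamp just below the oldest item seen so far
    while True:
        params = {"numberOfResults": 25}
        if cursor is not None:
            params["modificationDate"] = cursor
        items = requests.get(f"{BASE}/conversations/{conv_id}/items",
                             params=params, headers=HEADERS).json()
        if not items:
            return
        yield from items
        # step past the oldest item; inclusive/exclusive semantics of
        # the cursor parameter are an assumption here
        cursor = min(item["creationTime"] for item in items) - 1

for item in page_items("some-conversation-id"):
    print(item)
```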

Related

Should vue filtering use REST API or url parameters

I'm designing a website with a REST API using Django Rest Framework and Vue for the front end and I'm trying to work out what the proper way to do filtering is.
As far as I can see, I can either:
a) Allow filtering via the API by using URL parameters like /?foo=bar
or
b) Do all the filtering on the Vue side by only displaying the items that are returned that have foo=bar
Are there any strong reasons for doing one over the other?
The real answer to this question is "it depends".
Here are a few questions to ask yourself to help determine what the best approach is:
How much data will be returned if I don't filter at the API level?
If you're returning just a few records, there won't be a noticeable performance hit when the query runs. If you're returning thousands, you'll likely want to consider server side querying/paging.
If you're building an application where the amount of data will grow over time, it's best to build the server side querying from the get-go.
What do I want the front-end experience to be like?
For API calls that return small amounts of data, the user experience will be much more responsive if you return all records up front and do client-side filtering. That way if users change filters or click through paged data, the UI can update almost instantaneously.
Will any other applications be consuming my API?
If you plan to build other apps that consume the API, you may want to build the filtering at the API level so you don't need to recreate front-end filtering logic in every consuming application.
Hopefully these questions can help guide you to the best answer for your use case.
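For option (a), a minimal sketch of URL-parameter filtering in Django REST Framework might look like this; the Item model and the foo field are hypothetical stand-ins for your own code:

```python
# views.py: a sketch of option (a), filtering via URL parameters
# like /items/?foo=bar. Item and ItemSerializer are hypothetical.
from rest_framework import generics

from .models import Item
from .serializers import ItemSerializer

class ItemList(generics.ListAPIView):
    serializer_class = ItemSerializer

    def get_queryset(self):
        queryset = Item.objects.all()
        foo = self.request.query_params.get("foo")
        if foo is not None:
            # push the filter into the database query instead of
            # returning everything and filtering on the client
            queryset = queryset.filter(foo=foo)
        return queryset
```

If you install django-filter, the same thing can usually be expressed declaratively by adding DjangoFilterBackend and filterset_fields = ["foo"] to the view, which saves you writing get_queryset by hand.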
Whenever I come across this issue I ask myself just one question: How many items are you working with? If you're only returning a few items from the API you can easily do the filtering on the front-end side and save yourself a bunch of requests whenever the results are filtered. Also, if the result set is quite small, it's a lot faster to do it this way rather than sending off a request every time the filters change.
However, if you're working with a very large number of items, it's probably best to just filter them out in the API, or even via your database query if that's what you're working with. This will save you from returning a large number of results to the front-end. Also, filtering large numbers of items on the front-end can significantly impact performance since it usually involves looping over a collection.

What is the best way to bulk list items to ebay?

I have thousands of items and multiple eBay accounts.
They are mostly variation items with differing prices per color and size variation.
What's the best way to bulk list all the items?
Previously, I had everything listed one at a time, using either the addItem or addVariationItem calls.
But that can't be how large sellers in my situation manage things.
What is the preferred way, the eBay API of choice, that best accomplishes this task?
Likewise, for updating the price and stock for all SKUs multiple times per day?
Is there a one-shot call or upload way which can do this?
There are some bulk options in the eBay APIs, but if you need to make more than 5,000 calls per day you first have to build an app and pass eBay's verification process, and since one item means at least one call...
Otherwise you have to use a SaaS service.
If you've already passed that step, you can look for what used to be the LMS (Large Merchant Services) APIs. They were actually decommissioned last weekend, but there is a new version (which isn't working properly yet, though we hope it will soon). I suggest you follow the migration procedure. Like much of the eBay documentation it is not very clear, but IMHO it is the best path.
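For the price/stock part of the question specifically, one option below the bulk-feed level is the classic Trading API's ReviseInventoryStatus call, which updates quantity and price for up to four listings per request, so a full repricing run still costs one call per four SKUs. A minimal sketch, assuming an Auth'n'Auth token (token, item IDs, and compatibility level are placeholders):

```python
# A hedged sketch of batching price/quantity updates with the Trading
# API's ReviseInventoryStatus call (max four listings per request).
import requests

ENDPOINT = "https://api.ebay.com/ws/api.dll"
HEADERS = {
    "X-EBAY-API-CALL-NAME": "ReviseInventoryStatus",
    "X-EBAY-API-SITEID": "0",
    "X-EBAY-API-COMPATIBILITY-LEVEL": "1193",  # use a current version
    "Content-Type": "text/xml",
}

AUTH_TOKEN = "<your eBay auth token>"  # placeholder
all_updates = [  # (item_id, quantity, price) - example data
    ("110000000001", 5, "19.99"),
    ("110000000002", 0, "24.99"),
    ("110000000003", 12, "9.99"),
    ("110000000004", 3, "14.99"),
    ("110000000005", 7, "29.99"),
]

def revise_batch(token, updates):
    # updates: up to four (item_id, quantity, price) tuples per call
    statuses = "".join(
        f"<InventoryStatus><ItemID>{i}</ItemID>"
        f"<Quantity>{q}</Quantity><StartPrice>{p}</StartPrice>"
        f"</InventoryStatus>"
        for i, q, p in updates
    )
    body = (
        '<?xml version="1.0" encoding="utf-8"?>'
        '<ReviseInventoryStatusRequest xmlns="urn:ebay:apis:eBLBaseComponents">'
        f"<RequesterCredentials><eBayAuthToken>{token}</eBayAuthToken>"
        f"</RequesterCredentials>{statuses}</ReviseInventoryStatusRequest>"
    )
    return requests.post(ENDPOINT, data=body, headers=HEADERS)

for i in range(0, len(all_updates), 4):
    revise_batch(AUTH_TOKEN, all_updates[i:i + 4])
```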

Get github API results more than 100

I want to develop a GitHub-like issue tracker.
For that I have been working with the API below:
https://api.github.com/repos/facebook/react/issues?per_page=100
But this API returns only 100 results per request, as per the docs.
Is there a way I can get all the issues and not just 100? I could make multiple requests, but I don't think that is a feasible way of doing it.
The issue object itself contains the author, labels, and assignee, so I need all the results at once.
Is there any way to do it?
No, there is no way to get all of the results without pagination. GitHub, like almost all major web sites, has a time limit on the amount of time a request can take. If you have a repository with, say, 150 000 issues, then any reasonable operation on all of those issues will take longer than the timeout. Therefore, it doesn't make sense for GitHub to allow you to disable pagination in this way because the request would invariably fail anyway.
Even if you use the GraphQL API, you still get a limited number of results. If you want to fetch all of the issues, you'll need to make multiple requests.
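A sketch of what those multiple requests look like in practice: GitHub returns a Link header with a rel="next" URL, which the requests library exposes as response.links, so you just follow it until it runs out. Unauthenticated requests are heavily rate-limited, so pass a token for any real use.

```python
# A minimal sketch of walking GitHub's pagination via the Link header.
import requests

def fetch_all_issues(repo, token=None):
    url = f"https://api.github.com/repos/{repo}/issues"
    params = {"per_page": 100}
    headers = {"Authorization": f"Bearer {token}"} if token else {}
    issues = []
    while url:
        resp = requests.get(url, params=params, headers=headers)
        resp.raise_for_status()
        issues.extend(resp.json())
        # requests parses the Link header into resp.links for us
        url = resp.links.get("next", {}).get("url")
        params = None  # the 'next' URL already carries the query string
    return issues

issues = fetch_all_issues("facebook/react")
print(len(issues))
```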

VirtoCommerce API getting item prices

I am using VirtoCommerce 2.9 and have some questions regarding the API and what would be the best way to get all the information I need, while keeping the number of API requests down.
Right now I am using the endpoint /api/catalog/search to find items that match a number of attributes. But the response does not include prices or product texts, both of which I would like to present to the end user. What would be the correct or best way to retrieve this information?
Thanks!
Cheers!
Currently the search service does not return the description or price for products.
To get these details you need to use two separate queries:
api/catalog/product/ids?respGroup='ItemSmall'
to get the product details including the description, and
api/pricing/evaluate
to retrieve the actual product prices. You can call them in parallel for better performance.
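A minimal sketch of the two follow-up calls issued in parallel; the host, the exact query parameters, and the shape of the pricing request body are assumptions, so check the VirtoCommerce API docs:

```python
# A hedged sketch of fetching product details and prices in parallel
# after a /api/catalog/search call. Host and payload shapes are
# placeholders; verify against your VirtoCommerce instance.
from concurrent.futures import ThreadPoolExecutor
import requests

BASE = "https://your-store.example/admin/api"  # hypothetical host
product_ids = ["8b7b07c165924a879392f4f51a6f7ce0"]  # from catalog search

def get_products(ids):
    # ItemSmall response group includes the description
    return requests.get(f"{BASE}/catalog/product/ids",
                        params={"ids": ids, "respGroup": "ItemSmall"}).json()

def evaluate_prices(ids):
    # request body shape is an assumption; see the pricing module docs
    return requests.post(f"{BASE}/pricing/evaluate",
                         json={"productIds": ids}).json()

# the two calls are independent, so run them concurrently as suggested
with ThreadPoolExecutor(max_workers=2) as pool:
    products_future = pool.submit(get_products, product_ids)
    prices_future = pool.submit(evaluate_prices, product_ids)
products = products_future.result()
prices = prices_future.result()
```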
Be careful with the WithProperties response group, because it may cause performance problems. Products are returned with all property values anyway; this response group is only responsible for retrieving the properties' meta-information (possible dictionary values, multilingual, required or optional flags, etc.). That information is mostly used in the admin area and almost never in the storefront.
The indexed search module will change significantly in future versions, and you will be able to have more control over the product details in the search index.

How to skip known entries when syncing with Google Reader?

For writing an offline client to the Google Reader service, I would like to know how best to sync with the service.
There doesn't seem to be official documentation yet and the best source I found so far is this: http://code.google.com/p/pyrfeed/wiki/GoogleReaderAPI
Now consider this: With the information from above I can download all unread items, I can specify how many items to download and using the atom-id I can detect duplicate entries that I already downloaded.
What's missing for me is a way to specify that I just want the updates since my last sync.
I can ask for the 10 latest entries (parameters n=10 and r=d). If I specify the parameter r=o (date ascending) then I can also specify the parameter ot=[last time of sync], but only then, and the ascending order doesn't make any sense when I just want to read some items rather than all items.
Any idea how to solve that without downloading all items again and just rejecting duplicates? That is not a very economical way of polling.
Someone proposed that I could specify that I only want the unread entries. But to make that solution work in such a way that Google Reader will not offer these entries again, I would need to mark them as read. In turn, that would mean that I need to keep my own read/unread state on the client, and that the entries would already be marked as read when the user logs on to the online version of Google Reader. That doesn't work for me.
Cheers,
Mariano
To get the latest entries, use the standard from-newest-date-descending download, which will start from the latest entries. You will receive a "continuation" token in the XML result, looking something like this:
<gr:continuation>CArhxxjRmNsC</gr:continuation>
Scan through the results, pulling out anything new to you. You should find that either all results are new, or everything up to a point is new, and all after that are already known to you.
In the latter case, you're done, but in the former you need to find the new stuff older than what you've already retrieved. Do this by using the continuation to get the results starting from just after the last result in the set you just retrieved by passing it in the GET request as the c parameter, e.g.:
http://www.google.com/reader/atom/user/-/state/com.google/reading-list?c=CArhxxjRmNsC
Continue this way until you have everything.
The n parameter, which is a count of the number of items to retrieve, works well with this, and you can change it as you go. If the frequency of checking is user-set, and thus could be very frequent or very rare, you can use an adaptive algorithm to reduce network traffic and your processing load. Initially request a small number of the latest entries, say five (add n=5 to the URL of your GET request). If all are new, in the next request, where you use the continuation, ask for a larger number, say 20. If those are still all new, either the feed has a lot of updates or it's been a while, so continue on in groups of 100 or whatever.
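Putting the continuation loop and the adaptive batch size together, a sketch might look like this; the gr: namespace URI and the authentication details are assumptions based on the unofficial API notes:

```python
# A hedged sketch of the continuation-based fetch loop described
# above. Auth headers are omitted; the gr: namespace URI is assumed.
import requests
import xml.etree.ElementTree as ET

FEED = ("http://www.google.com/reader/atom/"
        "user/-/state/com.google/reading-list")
ATOM = "{http://www.w3.org/2005/Atom}"
GR = "{http://www.google.com/schemas/reader/atom/}"  # assumed namespace

def fetch_new(known_ids, auth_headers):
    n, continuation, new_items = 5, None, []
    while True:
        params = {"n": n}
        if continuation:
            params["c"] = continuation  # resume just after the last batch
        resp = requests.get(FEED, params=params, headers=auth_headers)
        root = ET.fromstring(resp.content)
        for entry in root.findall(f"{ATOM}entry"):
            atom_id = entry.findtext(f"{ATOM}id")
            if atom_id in known_ids:
                return new_items  # reached items we already have
            new_items.append(entry)
        cont = root.find(f"{GR}continuation")
        if cont is None:
            return new_items  # no more results
        continuation = cont.text
        n = 20 if n == 5 else 100  # grow 5 -> 20 -> 100 as described
```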
However, and correct me if I'm wrong here, you also want to know, after you've downloaded an item, whether its state changes from "unread" to "read" due to the person reading it using the Google Reader interface.
One approach to this would be:
Update the status on google of any items that have been read locally.
Check and save the unread count for the feed. (You want to do this before the next step, so that you guarantee that new items have not arrived between your download of the newest items and the time you check the read count.)
Download the latest items.
Calculate your read count, and compare that to google's. If the feed has a higher read count than you calculated, you know that something's been read on google.
If something has been read on google, start downloading read items and comparing them with your database of unread items. You'll find some items that google says are read that your database claims are unread; update these. Continue doing so until you've found a number of these items equal to the difference between your read count and google's, or until the downloads get unreasonable.
If you didn't find all of the read items, c'est la vie; record the number remaining as an "unfound unread" total which you also need to include in your next calculation of the local number you think are unread.
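In code, the steps above might be wired together like this; every google and db helper here is hypothetical, named only to show the control flow:

```python
# A sketch of the read-count reconciliation steps above. The google
# and db objects are hypothetical wrappers around the Reader API and
# a local item store.
def sync_read_state(db, google, feed):
    # 1. push locally-read items to google first
    google.mark_read(db.locally_read_items(feed))
    # 2. snapshot google's unread count before downloading anything new
    remote_unread = google.unread_count(feed)
    # 3. download the latest items
    db.store(google.latest_items(feed))
    # 4. items read on google = our unread count minus google's
    to_find = db.unread_count(feed) - remote_unread
    # 5. walk google's read items until the difference is accounted for
    for item in google.read_items(feed):
        if to_find <= 0:
            break
        if db.is_unread(item.id):
            db.mark_read(item.id)
            to_find -= 1
    # 6. anything left over becomes the "unfound unread" total
    db.record_unfound_unread(feed, to_find)
```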
If the user subscribes to a lot of different blogs, it's also likely he labels them extensively, so you can do this whole thing on a per-label basis rather than for the entire feed, which should help keep the amount of data down, since you won't need to do any transfers for labels where the user didn't read anything new on google reader.
This whole scheme can be applied to other statuses, such as starred or unstarred, as well.
Now, as you say, this
...would mean that I need to keep my own read/unread state on the client and that the entries are already marked as read when the user logs on to the online version of Google Reader. That doesn't work for me.
True enough. Neither keeping a local read/unread state (since you're keeping a database of all of the items anyway) nor marking items read in google (which the API supports) seems very difficult, so why doesn't this work for you?
There is one further hitch, however: the user may mark something already read as unread on google. This throws a bit of a wrench into the system. My suggestion there, if you really want to try to take care of this, is to assume that the user in general will be touching only more recent stuff, and download the latest couple hundred or so items every time, checking the status on all of them. (This isn't all that bad; downloading 100 items took me anywhere from 0.3s for 300KB, to 2.5s for 2.5MB, albeit on a very fast broadband connection.)
Again, if the user has a large number of subscriptions, he's also probably got a reasonably large number of labels, so doing this on a per-label basis will speed things up. I'd suggest, actually, that not only do you check on a per-label basis, but you also spread out the checks, checking a single label each minute rather than everything once every twenty minutes. You can also do this "big check" for status changes on older items less often than you do a "new stuff" check, perhaps once every few hours, if you want to keep bandwidth down.
This is a bit of bandwidth hog, mainly because you need to download the full article from Google merely to check the status. Unfortunately, I can't see any way around that in the API docs that we have available to us. My only real advice is to minimize the checking of status on non-new items.
The Google Reader API hasn't been officially released yet, so this answer may change once it is.
Currently, you would have to call the API and disregard items you have already downloaded, which as you said isn't terribly efficient, as you will be re-downloading items every time, even if you already have them.