Is there any way to find out how many web API calls are counted against our quota when making a REST API call (or SOAP API call) to read leads from a list? Please note that this is purely read-only: we are getting data from a static list of leads, which were added using a smart campaign.
We are bringing in 40 attributes from the Marketo lead record, totaling about 1,600 characters per record. Depending on need, we might need to stage anywhere from 200K to 1 million records into a static list. We are successfully extracting all that data, but we would like to find out how many API calls are being used.
Each authenticated request to an endpoint counts as a call. You can also use the usage API to see your daily usage: http://developers.marketo.com/documentation/rest/get-daily-usage/
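For illustration, a minimal sketch in Python of checking daily usage; the instance URL and credentials are placeholders, and it assumes the standard Marketo OAuth flow plus the daily-usage endpoint documented at the link above. Note that the usage call itself also counts against your quota.

import requests

# Placeholder instance URL and credentials -- substitute your own.
BASE = "https://123-ABC-456.mktorest.com"
CLIENT_ID = "your-client-id"
CLIENT_SECRET = "your-client-secret"

# Authenticate; each subsequent authenticated endpoint request counts as one call.
token = requests.get(
    f"{BASE}/identity/oauth/token",
    params={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    },
).json()["access_token"]

# Read today's API usage, broken down by API user.
usage = requests.get(
    f"{BASE}/rest/v1/stats/usage.json",
    params={"access_token": token},
).json()
print(usage)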
If I develop an API that returns a list of offers (v1/offers) and the details of a specific offer (v1/offers/12345), is it good practice to return every offer's details when the mobile client calls /offers, to reduce calls to the offer details API? That is, instead of calling v1/offers/12345, all details for offer 12345 would already be included in the v1/offers response.
Thanks
REST does not dictate how to solve this.
It is perfectly okay for a REST API to declare a query parameter or HTTP header by which the API client can declare the required level of detail (e.g. minimal / compact / detailed).
This prevents underfetching (where the client first gets the list of offer IDs and then has to fetch every individual offer) as well as overfetching (returning details of each offer that the client wasn't even interested in), but it requires the API client to make this decision. Whether that is "good" or not depends on how many API clients you have and how much their usage requirements differ.
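As a sketch of what the server side of such a parameter could look like (the detail parameter, the field split, and the use of Flask are illustrative assumptions, not anything REST itself prescribes):

from flask import Flask, jsonify, request

app = Flask(__name__)

# Toy in-memory data; a real service would query a data store.
OFFERS = {
    "12345": {"id": "12345", "title": "Spring sale", "description": "...", "terms": "..."},
    "67890": {"id": "67890", "title": "Bundle deal", "description": "...", "terms": "..."},
}

@app.get("/v1/offers")
def list_offers():
    # The client picks the level of detail; default to the compact form.
    detail = request.args.get("detail", "compact")
    if detail == "full":
        return jsonify(list(OFFERS.values()))
    # Compact form: just the fields a list screen needs.
    return jsonify([{"id": o["id"], "title": o["title"]} for o in OFFERS.values()])

A mobile client that renders a list-plus-detail screen in one go would then call /v1/offers?detail=full once, instead of making 1 + N requests.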
I'm in the process of developing a new tool for a company app. The tool will be sending a large volume of searches to the Amadeus API. Is every search considered a request? A single user search may have to hit the API 1,000 times; are those all counted as requests? Because if the company has a 10,000-request limit per month, it would be used up by just 10 users! I need to understand this, please.
Every time you call an API (every time you use a GET/POST verb), you make a "request".
The limitation (quota) exists only in the test environment: you don't pay for it, but you have a limited number of calls and access to only a subset of data.
In production, there is no limit on the total number of queries you can make. You get access to our full set of (live) data, but you pay per use (you pay for each request you make).
You do have a limit on the number of requests you can make per second (TPS: 10 in production, 5 in test).
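So the thing to engineer around is the per-second rate, not a monthly total. A minimal client-side throttle sketch in Python (the class and the commented-out endpoint are illustrative assumptions):

import time
import requests

class ThrottledClient:
    """Spaces requests out so we stay under a transactions-per-second cap."""

    def __init__(self, max_tps: float):
        self.min_interval = 1.0 / max_tps
        self.last_request = 0.0

    def get(self, url: str, **kwargs) -> requests.Response:
        # Sleep just long enough to respect the minimum spacing between calls.
        wait = self.min_interval - (time.monotonic() - self.last_request)
        if wait > 0:
            time.sleep(wait)
        self.last_request = time.monotonic()
        return requests.get(url, **kwargs)

# 5 TPS in the Amadeus test environment, 10 in production.
client = ThrottledClient(max_tps=5)
# resp = client.get("https://test.api.amadeus.com/v2/shopping/flight-offers", params=...)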
Since version 2018.3, YouTrack has shipped a new API for administering the system. One example of a new endpoint is /api/admin/users/, which is supposed to return the collection of users in the YouTrack instance, with a much wider variety of fields available than in the old, deprecated API.
However, when using it, I've found that it returns only a subset of all users in the instance; in my case, it produces only 42 out of 106 users.
As a workaround, I've used the deprecated API endpoint, /rest/admin/user/, to get all users, then called the new endpoint for each of the 106 results to get the newly available detailed information. But this is rather wasteful in the number of calls required, adds a dependency on a deprecated API, is altogether pretty wonky, and doesn't appear to be the intended workflow.
So the question becomes: How does one use the new API to get all users?
There is a default limit on a result array, which is 42.
You can override it by sending /api/admin/users/?$top=<YOUR_LIMIT>; you can also send -1 to get the whole set of users (which may cause performance issues).
Additionally, you can use a combination of the $top and $skip query parameters to page through your users, as in the sketch below.
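A paging sketch in Python; the instance URL, token, and fields selection are placeholders:

import requests

# Placeholder instance URL and permanent token.
BASE = "https://youtrack.example.com/api/admin/users"
HEADERS = {"Authorization": "Bearer perm:your-token"}
PAGE_SIZE = 50

def all_users():
    skip = 0
    while True:
        resp = requests.get(
            BASE,
            headers=HEADERS,
            # The new API returns only the fields you explicitly request.
            params={"$top": PAGE_SIZE, "$skip": skip, "fields": "id,login,name"},
        )
        resp.raise_for_status()
        page = resp.json()
        yield from page
        if len(page) < PAGE_SIZE:  # a short page means we've reached the end
            break
        skip += PAGE_SIZE

print(len(list(all_users())))  # should now report all 106 users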
I am using the Bigcommerce API to develop a small standalone application for a client. I store product information in a local database any time I fetch products from Bigcommerce, to reduce latency and network load. However, products can change on Bigcommerce, and while it is acceptable for my application to show mildly outdated information, I will need to update my local cache at some point. My current plan is to store the date I originally requested each product, and once that date is old enough, perform another request to refresh the cache.
My question is, given a list of products (including their Bigcommerce IDs), is there a way to request updates to all of them through a single call to the Products Resource? I can make a request for each individual product by calling:
GET {api}/v2/products/{id}
I can also request all products within an unbroken ID range by calling:
GET {api}/v2/products?min_id={value}&max_id={value}
I am able to successfully call both of the above methods, and I can chain them together in loops to fetch all products. What I want to do is request multiple products with unrelated IDs in a single call. So, something like this:
//THIS IS NOT A REAL METHOD!
GET {api}/v2/products?id[]={value1}&id[]={value2}
Is there any way I can do this? Or is there another approach to solving this that I haven't considered? My main requirements are:
Minimal API requests. My application is small but my client's Bigcommerce store is not, and I will be processing tens of thousands of products. I have limited CPU and network resources available, and I simply cannot process that many requests.
Scalable. As I said, my client's store is large, and growing. I need a solution whose overhead scales at a manageable rate with number of products.
Note: my application is a small web application written in PHP running on a Linux shared hosting environment. It is a back-of-house system which will likely only be used by a single user at a time, during standard business hours. I haven't tagged the question with PHP because my question is about the API, which is language agnostic.
One approach could be:
First, get all products from Bigcommerce with a plain products call and cache them.
Then, at some interval, request only the products that have changed.
You can use min_date_modified and max_date_modified (or min_date_created and max_date_created) in the products API call to get the updated product details, as in the sketch below.
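A minimal sketch of that refresh loop in Python; the store URL, credentials, and page size are placeholder assumptions about a v2 setup:

import requests
from datetime import datetime, timedelta, timezone

# Placeholder store URL and basic-auth credentials.
API_BASE = "https://store.example.com/api/v2"
AUTH = ("api_user", "api_token")

def recently_modified(since: datetime):
    """Yield products modified since `since`, walking the paged results."""
    params = {
        "min_date_modified": since.strftime("%a, %d %b %Y %H:%M:%S +0000"),
        "limit": 250,  # assumed page-size cap for the v2 products resource
        "page": 1,
    }
    while True:
        resp = requests.get(f"{API_BASE}/products", params=params, auth=AUTH)
        resp.raise_for_status()
        if resp.status_code == 204 or not resp.content:  # no more results
            break
        yield from resp.json()
        params["page"] += 1

# Refresh anything that changed in the last day, then update the local cache.
for product in recently_modified(datetime.now(timezone.utc) - timedelta(days=1)):
    print(product["id"], product["date_modified"])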
I have an internal API for my company that contains a large amount of factual data (80MM records as of right now). I have four clients that connect to me on a regular basis. The main API call adds a new item to the database, verifies its authenticity, and then returns structured, analyzed data based on the item submitted.
Over time, as we identify more data to be associated with an item, I need to be able to let my clients know that records have changed.
Right now I have a /recent endpoint, which returns all of the records that have changed since $timestamp. This is fine for small data sets, but given the large number of transactions, one could easily wind up with a /recent dataset of over a million items, especially if there's a large data import.
Another idea I had was to use web hooks to push data to the clients, but then the problem becomes pushing too much data. My clients don't necessarily need updates for every single item that changed -- maybe they only need ones they've already submitted.
The question is less about code and more about design patterns or code strategies:
What are some optimal strategies for notifying my clients of updated records without flooding my clients with unnecessary requests or providing millions of records on a poll?
I've used 3rd party APIs (such as Amazon) that paginate large requests. If the data set exceeds the page limit, the client needs to make another request for the next page. This would be in combination with the /recent endpoint.
The actual implementation would be something like
{
requestId: "foobar",
page: 0,
pages: 10,
data: {
...
}
}
The client makes the request and gets the first page of data, then asks the endpoint for subsequent pages by sending the requestId and page number. Somehow you'd want to persist a reference to which data corresponds to a requestId.
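A client-side sketch of that loop in Python; the endpoint, parameters, and the idea of pinning follow-up pages to a server-side snapshot via requestId are illustrative assumptions:

import requests

# Placeholder endpoint matching the envelope above.
BASE = "https://api.example.com/recent"

def fetch_all_changes(since: str):
    """Walk every page of a paginated /recent response."""
    page, request_id = 0, None
    while True:
        params = {"since": since, "page": page}
        if request_id:
            # Pin follow-up pages to the snapshot the first request created,
            # so records changing mid-iteration don't shift between pages.
            params["requestId"] = request_id
        body = requests.get(BASE, params=params).json()
        request_id = body["requestId"]
        yield body["data"]
        page += 1
        if page >= body["pages"]:
            break

for chunk in fetch_all_changes(since="2015-06-01T00:00:00Z"):
    print(chunk)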