Calculating Steam Inventory Value - API

I don't have a problem getting inventory items, but I can't calculate item prices efficiently.
Valve doesn't offer an official API for prices.
Here is what I have tried (using "steamcommunity.com"), in JavaScript:
itemHashNameArray.forEach((hashname) => {
  // The query string must start with "?", not "&".
  let url = `https://steamcommunity.com/market/priceoverview/?appid=730&market_hash_name=${encodeURIComponent(hashname)}`;
  // "steamApi" just sends a GET request to the site and returns the response as JSON.
  let itemDetails = steamApi(url);
  // lowest_price is a localized string such as "$1.23" or "1,23€", so strip
  // everything but digits and separators before parsing.
  let itemPrice = parseFloat(
    itemDetails.lowest_price.replace(/[^\d.,]/g, "").replace(",", ".")
  );
});
"steamcommunity.com/market" allows you to get 1 item price per request.
This is very slow and inefficient, and Steam blocks you after too many requests.
The third-party APIs I found also allow only one item per request,
and they don't support any currency except the dollar.
I need to calculate prices in other currencies too.
Is there a faster and better way or API?

Just to clarify what's going on:
Most webpages and APIs limit the rate of requests; this helps prevent malicious DoS attacks and keeps resource drain (which is often costly) in check. I've run into this trouble trying to pull Wikipedia/DBpedia information.
While this is annoying, oftentimes a company or a third party will recognize the need for bulk transactions against their data and open up an API (usually for a fee): https://partner.steamgames.com/doc/gettingstarted
Some third parties include SteamApis and Steamlytics.
There is a whole subreddit dedicated to this too, here's a relevant post for the API call to get all of a user's inventory items: https://www.reddit.com/r/SteamBot/comments/jey4sg/help_i_am_trying_to_make_a_script_that_counts_the/
Potentially another option is to roll your own API service, which could somehow pull inventory data at a rate that doesn't hit the limits. I'm not sure of the legality of this or what hoops third parties have had to jump through, and it would probably be the most costly option unless you start charging for usage. Loading the full HTML and writing a parser to dissect it with plain JavaScript or jQuery might make the calls look less like spam, but that probably has a limit too, and I'm not sure all of the relevant information would be easily available that way.
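If you stay on the public priceoverview endpoint, the practical mitigation is to throttle and cache. A minimal sketch in JavaScript, assuming Node 18+ for the global fetch, the endpoint's currency parameter (e.g. 3 for EUR), and a guessed delay that stays under the rate limit:

// A minimal throttled fetcher - a sketch, not a drop-in solution.
// Assumes Node 18+ (global fetch); delayMs is a guess at a safe rate.
async function fetchPrices(hashNames, currency = 3, delayMs = 3000) {
  const prices = {};
  for (const name of hashNames) {
    const url = `https://steamcommunity.com/market/priceoverview/` +
      `?appid=730&currency=${currency}&market_hash_name=${encodeURIComponent(name)}`;
    const res = await fetch(url);
    if (res.ok) {
      const data = await res.json();
      prices[name] = data.lowest_price; // still a localized price string
    }
    await new Promise((resolve) => setTimeout(resolve, delayMs)); // throttle
  }
  return prices;
}

Pairing this with a local cache of recent prices keeps repeat lookups from hitting Steam at all.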

Does it make sense to define a GET /users/{id}/photos route or should I just send multiple GET /photos/{id} requests to return a user's photos?

This is a dilemma I find myself facing very often when dealing with nested resources.
So suppose the target user has n photos. Does it make sense to define a GET /users/{id}/photos route or should I just send n GET /photos/{id} requests by first requesting the User object and then looping through that User's photo_ids attribute?
From a best-practices perspective, I would not send several requests for such similar resources. In that scenario you end up creating more work for yourself, since you have to rate-limit on the backend to keep users with a lot of pictures from overwhelming your server, and that hurts your UX as well.
I recommend you format your route as such:
GET /users/photos?id={{id}}
And return all associated photos with that user ID all at once. You can always limit that to X number of photos per call too, and paginate:
GET /users/photos?id=658&page={{1,2,3, etc.}}
My personal preference is always to try to keep variable data in the URL parameters. Having spent the afternoon with several unrelated APIs, I can tell you a whole slew of developers agree.
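For illustration, a sketch of that paginated route in Express; the in-memory photo store and the page size are stand-ins, not a real backend:

const express = require("express");
const app = express();

const PAGE_SIZE = 20; // arbitrary page size for this sketch
const photos = [];    // stand-in for a real photo store

// GET /users/photos?id=658&page=1
app.get("/users/photos", (req, res) => {
  const userId = Number(req.query.id);
  const page = Math.max(parseInt(req.query.page || "1", 10), 1);
  const matching = photos.filter((p) => p.userId === userId);
  const start = (page - 1) * PAGE_SIZE;
  res.json({
    userId,
    page,
    pages: Math.ceil(matching.length / PAGE_SIZE),
    photos: matching.slice(start, start + PAGE_SIZE),
  });
});

app.listen(3000);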

Bigcommerce - request products based on a list of IDs

I am using the Bigcommerce API to develop a small standalone application for a client. I store product information in a local database any time I fetch products from Bigcommerce, to reduce latency and network load. However, products can change on Bigcommerce, and while it is acceptable for my application to show mildly outdated information, I will need to update my local cache at some point. My current plan is to store the date on which I originally requested each product, and after a set period has elapsed, perform another request to refresh the cache.
My question is, given a list of products (including their Bigcommerce IDs), is there a way to request updates to all of them through a single call to the Products Resource? I can make a request for each individual product by calling:
GET {api}/v2/products/{id}
I can also request all products within an unbroken ID range by calling:
GET {api}/v2/products?min_id={value}&max_id={value}
I am able to successfully call both of the above methods, and I can chain them together in loops to fetch all products. What I want to do is request multiple products with unrelated IDs in a single call. So, something like this:
//THIS IS NOT A REAL METHOD!
GET {api}/v2/products?id[]={value1}&id[]={value2}
Is there any way I can do this? Or is there another approach to solving this that I haven't considered? My main requirements are:
Minimal API requests. My application is small but my client's Bigcommerce store is not, and I will be processing tens of thousands of products. I have limited CPU and network resources available, and I simply cannot process that many requests.
Scalable. As I said, my client's store is large, and growing. I need a solution whose overhead scales at a manageable rate with the number of products.
Note: my application is a small web application written in PHP running on a Linux shared hosting environment. It is a back-of-house system which will likely only be used by a single user at a time, during standard business hours. I haven't tagged the question with PHP because my question is about the API, which is language agnostic.
One approach can be:
First, get all products from BigCommerce using the simple products call.
Then set an interval at which to fetch the updated product list.
You can use min_date_modified and max_date_modified, or min_date_created and max_date_created, in the products API call to get the details of only the updated products.
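A sketch of that refresh loop, assuming the v2 products call accepts min_date_modified together with page/limit paging (Node 18+ for fetch); BASE, authHeaders, and updateLocalCache are placeholders:

// A sketch of the periodic refresh, not production code. BASE, authHeaders,
// and updateLocalCache stand in for your real config and persistence layer.
const BASE = "https://store.example/api";      // placeholder store URL
const authHeaders = { "X-Auth-Token": "..." }; // placeholder credentials
const updateLocalCache = (product) => { /* upsert into the local DB */ };

async function refreshModifiedProducts(lastSyncTime) {
  const since = encodeURIComponent(lastSyncTime.toISOString());
  let page = 1;
  for (;;) {
    const url = `${BASE}/v2/products?min_date_modified=${since}&page=${page}&limit=250`;
    const res = await fetch(url, { headers: authHeaders });
    if (res.status === 204) break;    // no (more) modified products
    const products = await res.json();
    products.forEach(updateLocalCache);
    if (products.length < 250) break; // short page means last page
    page++;
  }
}

Run this on a timer (say, hourly) and each pass only touches products that actually changed, instead of the whole catalog.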

How best to notify clients of changes in data via API

I have an internal API for my company that contains a large amount of factual data (80MM records as of right now). I have four clients that connect to me on a regular basis. The main API call adds a new item to the database, verifies its authenticity, and then returns structured, analyzed data based on the item submitted.
Over time, as we identify more data to be associated with an item, I need to be able to let my clients know that records have changed.
Right now I have a /recent endpoint, which returns all of the records that have changed since $timestamp. This is fine for small data sets, but given the large number of transactions, one could easily wind up with a /recent dataset of over a million items, especially if there's a large data import.
Another idea I had was to use web hooks to push data to the clients, but then the problem becomes pushing too much data. My clients don't necessarily need updates for every single item that changed -- maybe they only need ones they've already submitted.
The question is less about code and more about design patterns or code strategies:
What are some optimal strategies for notifying my clients of updated records without flooding my clients with unnecessary requests or providing millions of records on a poll?
I've used third-party APIs (such as Amazon's) that paginate large requests. If the data set exceeds the page limit, the client needs to make another request for the next page. This would work in combination with the /recent endpoint.
The actual implementation would be something like
{
  "requestId": "foobar",
  "page": 0,
  "pages": 10,
  "data": {
    ...
  }
}
The client makes the request and gets the first page of data, then sends the requestId and the page number to an endpoint for each subsequent page. Somehow you'd want to persist a reference to which data corresponds to a requestId.
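A client-side sketch of that flow; the /recent/page endpoint and its parameters are assumptions for illustration, not an existing API:

// Drains the paginated /recent feed - a sketch under assumed endpoints.
const BASE = "https://api.example.com";

async function fetchAllRecent(sinceTimestamp) {
  let res = await fetch(`${BASE}/recent?since=${sinceTimestamp}`);
  let body = await res.json();
  const pages = [body.data];
  // Re-send the requestId so the server can serve a stable snapshot.
  for (let page = body.page + 1; page < body.pages; page++) {
    res = await fetch(`${BASE}/recent/page?requestId=${body.requestId}&page=${page}`);
    body = await res.json();
    pages.push(body.data);
  }
  return pages;
}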

WCF Services, what's better, one big request or a lot of little ones?

I'm reviewing some code where we've had some issues with return data from a WCF web service. Currently the service builds a list of objects, serializes it (as JSON, for the record), and returns the entire serialized list down the wire. Obviously, when there's a lot of data, users run into quota limit problems.
I'm considering changing it so the service returns one item at a time, which would mean the client sends a stream of requests in a loop, adding one object at a time to the list until it is done.
Obviously in scenario one we're making one request to the service that has the potential to return a massive amount of data and run up against the quota. In the other scenario we never hit the quota but the requesting app will be requesting data item after data item in a stream of separate requests.
To illustrate: we have a list of items which come in a variety of item types, and those types come at a variety of price points. The app might want to aggregate a number of items, the customers who want each item, and the types and prices requested by each customer; there could be maybe seventy items, with between five and eighty customers each requesting on average two types of product at one price each.
Taking averages at the extreme end, this could mean 7,000 separate (very small) data requests in a single complete job. Is that a problem? It is possible to package things up a bit so that customer types and requested prices are bundled, but that's still potentially a couple of thousand requests at one time.
Am I better off with a single huge data stream? Or a couple of thousand smaller ones?
You're better off with the optimally sized return for your scenario :) It depends somewhat on the per-request overhead, but generally, the less chatter back and forth to a web service, the better.
Facetious answer, so here's the rub:
You're probably best off with some sort of paging system, wherein your request asks for a specific number of items, and your response returns a "n of m" in the results. That way you can tune the number of requests and size of the response to perform best in your situation.
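To make the "n of m" idea concrete, a client-side paging loop in JavaScript; the endpoint shape and field names are assumptions for the sketch, not the actual WCF contract:

// Fetches a large result set in tunable chunks - a sketch of the
// "n of m" paging idea.
const BASE = "https://service.example.com";

async function fetchAllItems(pageSize = 500) {
  const items = [];
  let page = 0;
  let totalPages = 1;
  while (page < totalPages) {
    const res = await fetch(`${BASE}/items?page=${page}&pageSize=${pageSize}`);
    const body = await res.json(); // e.g. { page: 0, totalPages: 14, items: [...] }
    totalPages = body.totalPages;  // the "m" in "n of m"
    items.push(...body.items);
    page++;
  }
  return items;
}

Tuning pageSize lets you trade a few large responses against many small ones until you find the sweet spot for your quota.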

eCommerce Third Party API Data Best Practice

What would be best practice for the following situation? I have an ecommerce store that pulls down inventory levels from a distributor. Should the site call the third-party API for the most up-to-date data every time a user loads a product detail page? Or should it call the third-party API, store that data in its own system for a certain amount of time, and update it periodically?
To me it seems obvious that the data should be updated every time the product detail page is loaded, but what about high-traffic ecommerce stores? Are completely different solutions used in that case?
In this case I would definitely cache the results from the distributor's site for some period of time, rather than hitting them every time you get a request. However, I would not simply use a blanket 5-minute or 30-minute timeout for all cache entries. Instead, I would use some heuristics. If possible, for instance if your application is written in a language like Python, you could attach a simple script to every product which implements the timeout.
This way, if an item is requested infrequently, or has a large amount in stock, you could cache it for a longer time.
if product.popularityrating > 8 or product.lastqtyinstock < 20:
    cache.expire(productnum)
    distributor.checkstock(productnum)
This gives you flexibility that you can call on if you need it. Initially, you can set all the rules to something like:
cache.expireover("3m", productnum)
distributor.checkstock(productnum)
In actual fact, the script would probably not include the checkstock function call, because that would live in the main app, but it is included here for context. If Python seems too heavyweight to include just for this small amount of flexibility, have a look at Tcl, which was specifically designed for this type of job. Both can be embedded easily in C, C++, C# and Java applications.
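For comparison, the same heuristic written directly in JavaScript, if embedding a scripting language feels like overkill; the field names and thresholds are illustrative, not from the original system:

// Pick a cache TTL per product instead of one blanket timeout.
function cacheTtlSeconds(product) {
  // Hot or nearly-out-of-stock items get a short TTL so stock stays fresh.
  if (product.popularityRating > 8 || product.lastQtyInStock < 20) {
    return 60;   // 1 minute
  }
  return 3 * 60; // the "3m" default from above
}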
Actually, there is another solution: your distributor keeps the product catalog on their servers and gives you access to it via Open Catalog Interface. When a user wants to place an order, he is redirected in-place to the distributor's catalog, chooses items, and the selection is then transferred back to your shop.
This approach is widely used in the SRM (Supplier Relationship Management) space.
It depends on many factors: the traffic to your site, how often the inventory levels change, the business impact of displaying outdated data, how often the suppliers allow you to call their API, the API's SLA in terms of availability and performance, and so on.
Once you have these answers, there are of course many possibilities. For example, for a low-traffic site where getting the inventory right is important, you may want to call the third-party API on every request, but revert to some alternative behavior (such as using cached data) if the API does not respond within a certain timeout.
Sometimes, well-designed APIs include hints as to the validity period of the data. For example, some REST-over-HTTP APIs support the various HTTP cache control headers, which can be used to specify a validity period or to retrieve data only if it has changed since the last request.
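For example, a conditional request using the standard Last-Modified / If-Modified-Since validation headers; whether a given supplier's API honors them is something to verify:

// Re-fetch only if the supplier's data changed - standard HTTP validation.
async function fetchIfChanged(url, lastModified) {
  const headers = lastModified ? { "If-Modified-Since": lastModified } : {};
  const res = await fetch(url, { headers });
  if (res.status === 304) return null; // unchanged: keep the cached copy
  return {
    data: await res.json(),
    lastModified: res.headers.get("Last-Modified"),
  };
}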