Storing data from eBay FindCompletedItems Response [closed] - ebay-api

I'm looking into using the findCompletedItems API request to look up historical prices on sold items. In the documentation (https://developer.ebay.com/devzone/finding/callref/findCompletedItems.html) it specifically states that you are limited to 5000 requests per day, which is fine, but it also says that you are not allowed to store the data, which makes this more difficult.
"Be aware that it is possible to use this call in such a way as to
violate the terms and conditions of your API License Agreement. Ensure
that you do not store the results retrieved from this call or use the
results for market research purposes."
Our purpose in using this data is to draw traffic to our application, which would in turn direct traffic to eBay through our referral links. But if we have to make this request every time a user looks at a particular item, it isn't feasible: we'd make far more than 5,000 requests a day, and even if we qualified for the elevated API request limits, 1.5 million would still not cut it, on top of slowing the application down considerably because we can't store any data.
So I'm just wondering what eBay technically considers "storing data". Could we cache the data for 48 hours or something along those lines?
Thanks!

I don't have a definitive answer for you, but I would imagine that caching the data for a limited time would be acceptable. If you respect their API, the eBay Dev staff are very reasonable people to work with.
I suspect their prohibition on storing data is aimed at longer-term API scraping and warehousing of data for deep post-analysis, research, and so on.
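If you do go the short-lived caching route, a minimal sketch of what that could look like (this is an illustration of the idea, not an official reading of eBay's terms; fetchCompletedItems is a hypothetical stand-in for your actual findCompletedItems call, and the 48-hour TTL is just the window from the question):

    // Minimal in-memory TTL cache: entries expire after a fixed window,
    // so nothing is retained long-term.
    type CacheEntry<T> = { value: T; expiresAt: number };

    class TtlCache<T> {
      private store = new Map<string, CacheEntry<T>>();
      constructor(private ttlMs: number) {}

      get(key: string): T | undefined {
        const entry = this.store.get(key);
        if (!entry || Date.now() > entry.expiresAt) {
          this.store.delete(key); // missing or expired: drop it
          return undefined;
        }
        return entry.value;
      }

      set(key: string, value: T): void {
        this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
      }
    }

    // Hypothetical wrapper standing in for the real findCompletedItems request.
    declare function fetchCompletedItems(keywords: string): Promise<unknown>;

    const cache = new TtlCache<unknown>(48 * 60 * 60 * 1000); // 48h, per the question

    async function getCompletedItems(keywords: string): Promise<unknown> {
      const hit = cache.get(keywords);
      if (hit !== undefined) return hit; // cache hit: no API call spent
      const result = await fetchCompletedItems(keywords);
      cache.set(keywords, result);
      return result;
    }

Every cache hit is a call you don't spend against the 5K/day quota, which is exactly the economics the questioner is worried about.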
Also, know that even if you get approved for 1.5M calls per day, that doesn't apply to the findCompletedItems (fCI) call (it only applies to the other Finding API calls), and you're still limited to 5K/day on fCI.
You speak of needing to display info about specific items. Remember, you can use GetSingleItem or GetMultipleItems from the Shopping API (1.5M calls/day, if approved) to get specific item info, including ended items. No need to use precious fCI calls to get item-specific info.
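For reference, a Shopping API GetSingleItem request is a plain GET. A rough sketch (the URL parameters follow the classic Shopping API URL format; YOUR_APP_ID and the version number are placeholders):

    // Hedged sketch: GetSingleItem via the eBay Shopping API.
    async function getSingleItem(itemId: string): Promise<unknown> {
      const url =
        "https://open.api.ebay.com/shopping" +
        "?callname=GetSingleItem" +
        "&responseencoding=JSON" +
        "&appid=YOUR_APP_ID" +   // placeholder application ID
        "&version=967" +         // example API version
        `&ItemID=${encodeURIComponent(itemId)}`;
      const res = await fetch(url);
      if (!res.ok) throw new Error(`Shopping API error: ${res.status}`);
      return res.json();
    }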

Related

How do spy tools like Ali Hunter and PPSPY get data from Shopify stores [closed]

How do spy tools like Ali Hunter and PPSPY get data from Shopify stores? Normally, to get this data you'd need a webhook, but that only applies to your own store and to stores that have installed your application.
PPSPY does the following:
1) It reads your Shopify sitemap to find the products in the store: .../sitemap_products_1.xml
2) As a fallback, it parses .../collections/all?sort_by=best-selling and tries to find the products there.
3) Next, it uses the JSON URL from Shopify and again tries to find all products. Example URL: .../products.json?page=1&limit=250 - most store owners don't even know this exists.
4) After that, it calls the JSON URL for each product. You can get this URL in your online store by opening a product page and appending ".json" to the URL. Example URL: .../products/your-productname.json
5) In this JSON there is a field "updated_at". It is updated every time a change is made, including when an order takes place (the stock changes). With this, it is possible to track sales (approximately).
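A hedged sketch of that last step, assuming a hypothetical store domain: poll the public products.json feed and diff the updated_at fields between snapshots.

    interface ShopifyProduct {
      id: number;
      title: string;
      updated_at: string; // changes on edits, restocks, and orders
    }

    // Fetch one page of the store's public product feed (up to 250 items).
    async function fetchProducts(store: string, page = 1): Promise<ShopifyProduct[]> {
      const res = await fetch(`https://${store}/products.json?page=${page}&limit=250`);
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      const body = (await res.json()) as { products: ShopifyProduct[] };
      return body.products;
    }

    // Products whose updated_at moved since the last snapshot are "activity"
    // candidates; an edit, a restock, and a sale all look the same from here,
    // which is why the tracking is only approximate.
    function changedSince(
      prev: Map<number, string>,
      current: ShopifyProduct[],
    ): ShopifyProduct[] {
      return current.filter(p => prev.has(p.id) && prev.get(p.id) !== p.updated_at);
    }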
These tools are called web scrapers or crawlers. They go to your product pages (following all the links in your e-commerce site) and parse the content of each page, extracting the product name and the product price. They do that every X hours or X days and collect the information, so they don't need any webhook.
In theory you could make your pages complicated enough that they aren't easy to crawl, for example by rendering the price with JavaScript (crawlers typically run with JavaScript disabled). But that would make your website less accessible, especially to Google, which is in fact just another crawler.

Understanding the "Backend" [closed]

What exactly is backend code? Is it just APIs? Is there business logic in the backend?
For instance, if I have a timesheet app and the user enters their time for the day, is the logic to add that information to the database all in the frontend? And the backend is just for exposing an API to add/update information to the database? For this simple timesheet app, generally speaking, what would be in the frontend vs the backend?
This is a very broad question/topic, but I'll try to explain my view of these terms and how to decide what goes where:
In general, the "front-end" is what the user will see. To make this possible, that code has to be "stored and executed" on the user's device. I'm quite certain this means (experienced) users will be able to look into the code of the frontend and might be able to understand how it works. (I'll explain later why that is relevant.)
The "back-end", on the other hand, is usually provided as an "as is" service that somebody offers, reachable via one or more of the many available communication protocols, and its interface is usually documented in some way, even if it's not public. The most prominent examples nowadays are REST APIs and GraphQL APIs.
As you already mention, centralized state management (e.g. storing data) can be an essential part of that, but it doesn't have to be.
When "making a call" to the back-end, some code gets executed on some server and the response (if any) is the only new information the frontend or user get to know.
What goes where?
There are many aspects to consider to make the decision what part of the code goes where. There is no silver bullet: I'm sure examples of all possible combinations can be found.
There can be many different front-ends for a single back-end, based on user preference (e.g. browser based, mobile apps, command line interface, ...).
They can have different release cycles and update mechanisms, so changes to the back-end might need to stay backwards compatible.
For security, operational, or data-consistency reasons, you might need to implement handling of wrong/invalid input in the back-end, especially if the (kind of) communication protocol changes. Offering a frontend also means anyone can see how the back-end is called, so it's also possible to call it differently (be it on purpose or by accident); see the sketch after this list.
Since operations between front-end and back-end are most likely async, certain error handling (like connection issues) can only be handled on front-end side.
Authentication and authorization / secrets management: if your back-end is only an API to the database, it needs to know the right credentials, so they need to be delivered to the user in some way and can be inspected (and potentially misused).
The same goes for business logic: there might be intellectual property or strategic reasons for not delivering it into the hands of users or potential competing companies.
And of course you need to consider how many resources you have available to implement a solution that suits your needs.
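To make that concrete for the timesheet example, here is a minimal hedged sketch of a back-end endpoint. Express and the saveEntry helper are assumptions for illustration, not part of the question's stack; the point is that validation and persistence live server-side, while the front-end only collects input and renders the response.

    import express from "express";

    const app = express();
    app.use(express.json());

    app.post("/api/timesheet-entries", async (req, res) => {
      const { date, hours, description } = req.body ?? {};

      // Validate in the back-end: the front-end can be bypassed, this cannot.
      if (typeof hours !== "number" || hours <= 0 || hours > 24) {
        return res.status(400).json({ error: "hours must be between 0 and 24" });
      }
      if (typeof date !== "string" || Number.isNaN(Date.parse(date))) {
        return res.status(400).json({ error: "date must be a valid ISO date" });
      }

      const entry = await saveEntry({ date, hours, description }); // hypothetical DB layer
      res.status(201).json(entry);
    });

    // Stub standing in for whatever persistence layer you actually use.
    declare function saveEntry(e: {
      date: string;
      hours: number;
      description?: string;
    }): Promise<{ id: string }>;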

API URL Structure [closed]

I'm creating a sweepstakes application that is powered by an API. The hierarchy is fairly straightforward.
Clients
Users
Sweepstakes
Submissions
Clients can have users who are essentially admins of any sweepstakes. Clients can also have multiple sweepstakes. And a single sweepstakes can have multiple submissions. Okay, not complicated.
Where I get confused is what the correct approach is towards the URL structure. I've read documentation and best practice blogs all over the internet, and yet I'm still confused. Here are our current routes:
CLIENTS
POST /clients
GET /clients
GET /clients/:client_id
PUT /clients/:client_id
USERS
POST /users
GET /users
GET /users/:user_id
PUT /users/:user_id
DELETE /users/:user_id
SWEEPSTAKES
POST /sweepstakes
GET /sweepstakes
GET /sweepstakes/:sweepstakes_id
PUT /sweepstakes/:sweepstakes_id
DELETE /sweepstakes/:sweepstakes_id
SUBMISSIONS
POST /submissions
GET /submissions
GET /submissions/:submission_id
PUT /submissions/:submission_id
DELETE /submissions/:submission_id
As you can see, I'm following a simple two-URLs-per-resource pattern -- what I feel is best practice. You can then drill into associations via query parameters on any GET request (e.g. /submissions?sweepstakes_id={sweepstakes_id}, /sweepstakes?client_id={client_id}, etc.).
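For illustration, a minimal sketch of that flat scheme (the Express framework and the data-access helpers are assumptions, not part of the question):

    import express from "express";
    const app = express();

    // Two URLs per resource: collection + single item.
    app.get("/submissions", (req, res) => {
      // Associations via query parameters, e.g. GET /submissions?sweepstakes_id=123
      const sweepstakesId = req.query.sweepstakes_id as string | undefined;
      res.json(findSubmissions({ sweepstakesId }));
    });

    app.get("/submissions/:submission_id", (req, res) => {
      res.json(findSubmission(req.params.submission_id));
    });

    // Hypothetical data-access helpers.
    declare function findSubmissions(filter: { sweepstakesId?: string }): unknown[];
    declare function findSubmission(id: string): unknown;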
This of course makes sense to me; however, my co-worker and I are in a tiff because he is using Backbone to build the primary front-end app. Backbone states that it supports RESTful API consumption out of the box, but my co-worker tells me that Backbone prefers a URL structure that represents the hierarchical structure. I of course think that will lead to a messy, long, and overall confusing URL structure. Ideally, my co-worker would like to see the following URL structures:
GET /clients
GET /clients/users
GET /clients/sweepstakes
GET /clients/sweepstakes/submissions
Note: the above routes would also have additional routes to complement single resources via an additional resource ID in the URL (e.g. /clients/users/:user_id, /clients/sweepstakes/:sweepstakes_id/submissions, etc.).
I know this is somewhat of a touchy subject, but I'd love to hear some feedback on this. I vote for two URLs per resource, and if any associations need to be made, they can be done through GET or POST parameters. But I could be totally wrong.
You're right, this is a controversial topic. That being said, I've found the gang at Apigee to be very helpful when trying to flesh out API standards and definitions. They don't say that one way is correct; they lean towards educated recommendations based on their industry experience.
They offer some great resources and have some content that may help you get to a position where you can defend your opinion with someone else in your corner.
Check out this webinar... it is fairly basic stuff but always great for a refresher when it comes to API design.
note: I have no affiliation with Apigee, I just think that they have done some great work trying to get a standard defined for API design.
~ good luck and I hope this helps
I can't find anything in the official definition of REST that suggests that hierarchical URIs are preferred, but I may have missed something. I don't know anything about Backbone, but if it needs that structure I guess you have to do it; there does not appear to be any reason why it would be better outside of that. However, it is often preferred that each resource have one unique URI, and the hierarchical scheme appears to give each resource several. I know you can't speak for your workmate, but has he offered any tangible reason why his scheme would have better functionality?
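For what it's worth, a hedged addendum on the Backbone point: Backbone does not actually require hierarchical URLs. A collection's url can be any endpoint, and association filters can ride along as query parameters (the data option is forwarded to the underlying ajax call):

    import * as Backbone from "backbone";

    // A flat endpoint works fine; Backbone only needs to know where the
    // collection lives.
    const Submissions = Backbone.Collection.extend({
      url: "/submissions",
    });

    const submissions = new Submissions();
    // Produces GET /submissions?sweepstakes_id=123, matching the flat
    // scheme proposed in the question.
    submissions.fetch({ data: { sweepstakes_id: "123" } });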

Need Help Planning Architecture for Categorization Conundrum [closed]

I have more of a "what-would-you-do" question than an actual coding question.
It relates to a project I am currently working on. In this project, we are tasked with combining several marketplace APIs into one interface. Each API has its own unique way of categorizing products. The top-level parent categories for all the APIs we are looking at are more or less the same, with some variations, but the subcategories are wildly different.
For example, one API requires long breadcrumb trails to select a category, such as: Sports > Ball Sports > New England > Football > Active Teams > Patriots > Memorabilia. Another API has two-level categorization: Sports > Patriots Memorabilia. In many cases, there are subcategories that don't relate whatsoever to the subcategories of the other APIs.
So, the question is: what is the best approach to take when designing the interface? We are currently wrestling between two possibilities:
1) Design a custom category UI on the client and then build logic into the server that is able to sort through the needs of the various APIs based on user-selected choices.
2) Create the UI in such a way that the user has to walk through the necessary steps for each individual API. Depending on user settings, this means they may need to fill out API-specific information 5, 6, 10, or more times.
While I am told that option number one is a real programming nightmare (the example I am given is changing API data fields), I feel strongly that option number two will piss off customers.
Any ideas out there??
This is a very hard problem. If you search for "ontology product classification" you'll find many research papers and discussions on the topic. If one scheme were simply a more detailed version of the other it would be quite feasible, but your description implies that isn't the case, and thus you'll need to construct your own classification scheme and map the others onto it.
Do you have a common key (UPC code? or other) that will allow you to verify your mapping between the different product categories? If so, you might be able to construct your own categorization scheme and then map the others onto it with some degree of success.
Clearly the first option is the best for the consumer but it could be very hard and very time consuming to construct such a mapping and it will need constant updates.
One approach would be to construct a simpler hierarchy than any of the ones provided. A simpler hierarchy will make the [mostly manual] effort of mapping categories into your hierarchy much easier as most will simply be inclusions. This might make the user experience worse but if you add great search capabilities and great "related products" / "people who bought this also bought this" tools around the product browsing experience you can probably make up for the lack of hierarchy.
Number One isn't that bad of a nightmare. Your users' experience is the number one priority; never forget that. If the user has an easier time navigating a shorter route, then give them that opportunity. Also, I would wrap the API with some abstraction so my code doesn't know about the API at all and only knows about the abstraction layer. This way I can change APIs and leave most of my code alone, only changing the abstraction layer.
Use a session to pass data from page to page and a factory to create the page's state on entry based on the session data; this will strengthen the context between page, state, and user data.
Keep first level objects (the ones the page directly talks to) in context to the page; this will help when diagnosing problems.
Most importantly, create a series of tests for your abstraction layer that test every object, function, and input output pair to make sure your application is rock solid.
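To make the abstraction-layer idea concrete, here is a hedged sketch. The adapter interface is invented for illustration, and the category values reuse the breadcrumbs from the question: the UI and server code deal only with your internal scheme, and each marketplace adapter maps it onto that marketplace's categories.

    // Internal, simplified category scheme (option 1 from the question).
    type InternalCategory = "sports_memorabilia" | "electronics";

    // Each marketplace adapter hides its own categorization behind one interface.
    interface MarketplaceAdapter {
      name: string;
      toMarketplaceCategory(cat: InternalCategory): string;
    }

    // Invented category values, for illustration only.
    const deepBreadcrumbMarket: MarketplaceAdapter = {
      name: "MarketA",
      toMarketplaceCategory: (cat) =>
        ({
          sports_memorabilia:
            "Sports > Ball Sports > New England > Football > Active Teams > Patriots > Memorabilia",
          electronics: "Electronics > Consumer > General",
        })[cat],
    };

    const twoLevelMarket: MarketplaceAdapter = {
      name: "MarketB",
      toMarketplaceCategory: (cat) =>
        ({
          sports_memorabilia: "Sports > Patriots Memorabilia",
          electronics: "Electronics > General",
        })[cat],
    };

    // The rest of the code deals only in InternalCategory; when a marketplace
    // changes its API data fields, only its adapter needs updating.
    function categoriesFor(adapters: MarketplaceAdapter[], cat: InternalCategory) {
      return adapters.map(a => ({ market: a.name, category: a.toMarketplaceCategory(cat) }));
    }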
You have to provide your clients with a consistent, unchanging interface.
I would like to see an example of the two different approaches you have in mind.
API.find( PRODUCT, CATEGORIES_LIST )

How to effectively collect information for a company? [closed]

Please feel free to move this to meta/superuser if this is the wrong place, but this is a developer-related question.
I have a smallish company with about 10 employees (developers). Often when I am browsing the internet, I come across various techniques and methods which I would like to share with them. One way is to simply point them to those links, but that's not too effective: sometimes the link dies, our connectivity is down, or people may want to add comments/thoughts, etc.
I am wondering what is the best way to organize all this data. Couple of questions:
Should I use a SO clone? Wiki? Digg clone?
Personally I don't want to use a wiki. I find it a pain to create links manually. I just want to post stuff and links, select an appropriate category, and let people view and comment.
How to get everyone involved in this process? SO does it well by giving points to users.
How does your company manage information?
Thank you for your time.
I quite liked a process once upon a time.
Start a knowledge base within the company using a blog, wiki, or SharePoint. SharePoint is nice in that it is basically set up and go, and you can modify it to specific needs down the line. With this you should allow your staff to add posts or blog entries etc., and then once a week/month/whenever you should have a half-day "learning" session.
In this session everyone can share ideas and "nice finds" with their fellow staff; alternatively, you give each member of the team the opportunity to "teach" a session whereby they can share a technology they've found and pitch it to the team.
This gives the following:
Adds to teamwork
Gives opportunities to change the way they work, by introducing new technologies
Active learning is always better than passive
The problem comes with people who are introverted, non-confident, or simply do not have the time to give lessons, all of which can be overcome by lowering their load, allowing some to do written presentations, etc.
Hope this helps.
Use a wiki or a blog, preferably one with both. That way they can search for things, and you encourage them to post their own information. It's not easy to get everyone on board, but keep trying.
I find the best way to get people involved is by example. Post good stuff, not just 'stuff I found today about blah...'. I read pages out there that do nothing but link to some new announcement or other, which I think is a waste of time. Better to post things of relevance, and not just bare links: put some comments along with them.