How do spy tools like Ali Hunter and PPSPY get data from Shopify stores? [closed] - api

How do spy tools like Ali-hunter and PPSPY get data from Shopify stores? Normally, to get this data you would need to use a webhook, but that only applies to your own store and to stores that have installed your application.

PPSPY does the following.
It reads your Shopify sitemap to find the products in the store.
.../sitemap_products_1.xml
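A minimal sketch of that sitemap step in Python (the store domain is a made-up placeholder; it assumes the standard public Shopify sitemap layout):

```python
# Rough sketch of the sitemap step, assuming a public Shopify store at
# "example-store.com" (hypothetical domain) with the standard product sitemap.
import requests
import xml.etree.ElementTree as ET

STORE = "https://example-store.com"
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def product_urls_from_sitemap(store=STORE):
    """Return the product page URLs listed in the store's product sitemap."""
    resp = requests.get(f"{store}/sitemap_products_1.xml", timeout=10)
    resp.raise_for_status()
    root = ET.fromstring(resp.content)
    # Each <url><loc>...</loc></url> entry points at one product page.
    return [loc.text for loc in root.findall(".//sm:loc", NS)]

if __name__ == "__main__":
    for url in product_urls_from_sitemap():
        print(url)
```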
As a fallback, it parses the URL:
.../collections/all?sort_by=best-selling
and tries to find the products there.
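A rough sketch of that fallback, pulling product handles out of the best-selling collection page (again a placeholder domain, and a deliberately naive regex rather than whatever parser the tool actually uses):

```python
# Fallback sketch: scrape product links out of the best-selling collection
# page when the sitemap is not usable (hypothetical store domain).
import re
import requests

STORE = "https://example-store.com"

def products_from_collection(store=STORE):
    page = requests.get(
        f"{store}/collections/all?sort_by=best-selling", timeout=10
    ).text
    # Product pages on Shopify live under /products/<handle>.
    handles = set(re.findall(r'href="[^"]*/products/([a-z0-9-]+)"', page))
    return sorted(handles)
```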
Next, it uses the JSON URL from Shopify, where it again tries to find all products. An example URL:
.../products.json?page=1&limit=250 - most store owners don't even know this exists.
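A hedged sketch of that step, paging through the public products.json endpoint until it comes back empty (placeholder domain; the endpoint returns at most 250 products per page):

```python
# Sketch of the products.json step (hypothetical store domain).
import requests

STORE = "https://example-store.com"

def all_products(store=STORE):
    products, page = [], 1
    while True:
        resp = requests.get(
            f"{store}/products.json",
            params={"page": page, "limit": 250},
            timeout=10,
        )
        resp.raise_for_status()
        batch = resp.json().get("products", [])
        if not batch:           # empty page means we have everything
            break
        products.extend(batch)
        page += 1
    return products
```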
After that, it calls the JSON URL for each product. You can get this
URL in your online store by simply opening a product page and writing
".json" after it in the URL. Example URL:
.../products/your-productname.json.
In this JSON there is a field "updated_at". This field is updated every time a change is made, including when an order takes place (because the stock changes).
And with this, it is possible to track the sales (approximately).
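Putting the last two steps together, a rough sketch of how such a tool could poll a product's .json and watch updated_at (the product handle and domain are hypothetical; the inferred sales are approximate for exactly the reason described above):

```python
# Sketch: watch "updated_at" on a single product to approximate activity.
import time
import requests

STORE = "https://example-store.com"

def product_snapshot(handle, store=STORE):
    resp = requests.get(f"{store}/products/{handle}.json", timeout=10)
    resp.raise_for_status()
    return resp.json()["product"]

def watch(handle, interval=3600):
    last_seen = None
    while True:
        product = product_snapshot(handle)
        if product["updated_at"] != last_seen:
            # Any edit OR an order (stock change) bumps this timestamp,
            # which is why the resulting sales numbers are only approximate.
            print("change detected at", product["updated_at"])
            last_seen = product["updated_at"]
        time.sleep(interval)
```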

They are called web scrapers or crawlers. They go to your product pages (following all the links in your ecommerce site) and understand the content of each page. They extract the product name and the product price. They do that every X hours or X days and collect the information, so they don't need any webhook.
In theory you could make your page complicated enough that it is not easy to crawl; for example, you could render the price with JavaScript (crawlers typically do not execute JavaScript). But that would make your website less accessible, especially to Google, which is in fact just another crawler.

Related

Storing data from eBay FindCompletedItems Response [closed]

I'm looking into using the findCompletedItems API request to look up historical prices on sold items. In the documentation (https://developer.ebay.com/devzone/finding/callref/findCompletedItems.html) it specifically states that you are limited to 5000 requests per day, which is fine, but it also says that you are not allowed to store the data, which makes this more difficult.
"Be aware that it is possible to use this call in such a way as to
violate the terms and conditions of your API License Agreement. Ensure
that you do not store the results retrieved from this call or use the
results for market research purposes."
Our purpose in using this data is to draw traffic to our application, which would in turn direct traffic to eBay using our referral links. But if we have to make this request every time a user looks at a particular item, it's not going to be feasible: we'll make far more than 5,000 requests a day, and even if we qualified for the elevated API request limits, 1.5 million would still not cut it, on top of slowing down the application considerably because we can't store any data.
So I'm just wondering what eBay technically considers "storing data". Can we cache the data for 48hrs or something along those lines?
Thanks!
I don't have a definitive answer for you, but I would imagine that caching the data for a limited time would be acceptable. If you respect their API, the eBay Dev staff are very reasonable people to work with.
I suspect their prohibition of storing data is meant for longer-term API-scraping and warehousing of data meant for deep post-analysis/research/etc.
Also, know that even if you get approved for 1.5M calls per day, that doesn't apply to the findCompletedItems (fCI) call (and only applies to the other Finding API calls), and you're still limited to 5K/day on fCI.
You speak of needing to display info about specific items. Remember, you can use GetSingleItem or GetMultipleItems from the Shopping API (1.5M calls/day, if approved) to get specific item info, including ended items. No need to use precious calls to fCI to get item specific info.
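Whatever eBay decides counts as "storing", the 48-hour cache idea itself is simple. A minimal sketch, with fetch_completed_items() left as a placeholder for your actual Finding API call (this is not a statement about what the license agreement allows):

```python
# TTL cache sketch: each item/search costs at most one findCompletedItems
# call per 48 hours. fetch_completed_items() is a placeholder, not a real API.
import time

CACHE_TTL = 48 * 3600          # 48 hours, in seconds
_cache = {}                    # search key -> (timestamp, response)

def fetch_completed_items(keywords):
    raise NotImplementedError("call the eBay Finding API here")

def completed_items(keywords):
    now = time.time()
    hit = _cache.get(keywords)
    if hit and now - hit[0] < CACHE_TTL:
        return hit[1]                      # fresh enough, no API call spent
    data = fetch_completed_items(keywords)
    _cache[keywords] = (now, data)
    return data
```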

Hiding products behind a form [closed]

I’m building a webshop that sells tires. I think it would be most user-friendly to hide my products behind a search form, where you can select tire dimension, price range, etc.
I’ve been told that Google will never submit a form when crawling a site, so if I “hide” the products behind a form, will Google ever index my products?
If not, how do I best work around this? I’ve been thinking about making a regular menu with category submenus (by brand, price range, speed limit, etc.), so that Google can crawl my links, and then replacing the menu with a form using JavaScript. Then Google will crawl the links and the user will browse by form. But if I have 3000 products, could it cause duplicate content, get flagged for link spam (if there is such a thing), etc.?
If the only way to find your products is to complete and submit a form then, no, neither Google nor any other search engine will be able to find and index that content. To get around this you have a few options:
Have an HTML sitemap on your site that also links to your products. Besides being a good way to generate internal links with good anchor text, it also allows search engines an alternative means to find that content.
Submit an XML sitemap. This is similar to an HTML sitemap except it is in XML and not publicly visible.
Use progressive enhancement and have a menu available to users who don't have JavaScript turned on. Then using JavaScript recreate your form functionality (assuming this increases usability).
You shouldn't run into any duplicate content issues unless you can get to the same product using more than one URL. None of the above should cause that to happen. But if the way you have implemented your products can cause this, just use canonical URLs to identify the main URL. Then, if the search engines see multiple pages with the same content, they know which one is the main page to include in their search results.
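As a concrete illustration of the XML sitemap option above, here is a minimal sketch that writes one <url> entry per product page; the product slugs are made up, and the domain reuses the example that appears later in this thread:

```python
# Generate a simple XML sitemap so crawlers can reach product pages that are
# otherwise hidden behind the search form. Slugs/domain are illustrative only.
import xml.etree.ElementTree as ET

SITE = "https://mytirestore.com"
products = ["firestone/r78-13", "goodyear/p205-55r16"]   # hypothetical slugs

urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
for slug in products:
    url = ET.SubElement(urlset, "url")
    ET.SubElement(url, "loc").text = f"{SITE}/{slug}"

ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)
```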
To avoid any on-site duplicate content issues, you can use the canonical tag to indicate the primary content page. This works quite well for ecommerce sites where there are often multiple ways to reach a product listing.
Another way a separate page for each product helps SEO is that it gives your visitors a good link to share via social networking, forums, blogs and so forth. So, instead of sharing something like mytirestore.com?q=24892489892489248&p=824824 they can share something like mytirestore.com/firestone/r78-13. This kind of keyword-targeted external link will also work wonders for your SEO for product-specific keywords.

How to effectively collect information for a company? [closed]

Please feel free to move this to meta/superuser if this is the wrong place. But this is a developer related question.
I have a smallish company with about 10 employees (developers). Often when I am browsing the internet, I come across various techniques and methods which I would like to share with them. Now one way is to simply point them to those links, but that's not too effective as sometimes the link dies, our connectivity is down, people may want to add some comments/thoughts etc.
I am wondering what is the best way to organize all this data. Couple of questions:
Should I use a SO clone? Wiki? Digg clone?
Personally I don't want to use a wiki. I find it a pain to create links manually. I just want to post stuff and links, select an appropriate category, and let people view and comment, etc.
How to get everyone involved in this process? SO does it well by giving points to users.
How does your company manage information?
Thank you for your time.
I quite liked a process once upon a time.
Start a knowledge base within the company using a Blog/Wiki/SharePoint. SharePoint is nice in that it is basically set up and go; you can modify it to specific needs down the line. With this you should allow your staff to add posts or blog entries, etc., and then once a week/month/whenever have a half-day "learning" session.
In this session everyone can share ideas and "nice finds" with their fellow staff; alternatively, give each member of the team the opportunity to "teach" a session where they share a technology they've found and basically pitch it to the team.
This gives the following:
Adds to teamwork
Gives opportunities to change the way they work, by introducing new technologies
Active learning is always better than passive
The problem comes with people who are introverted, non-confident, or simply do not have the time to give lessons, all of which can be overcome by lowering the load, allowing some to do written presentations, etc.
Hope this helps.
Use a wiki or a blog. Preferably one with both. That way people can search for things and you encourage them to post their own information. It's not easy to get everyone on board, but keep trying.
I find the best way to get people involved is by example. Post good stuff and not just 'stuff I found today about blah...'. I read pages out there that do nothing but link to some new announcement or another; a waste of time, I think. Better to post something of relevance, not just links. Put some comments along with the links.

What does eCommerce programming involve? [closed]

I'm trying to land my first programming-related job, and I found a website for a company which is accepting resumes for an eCommerce development position.
This is the requirements they listed:
To be proficient in:
HTML (hand-coded)
CSS
PHP
Javascript
MySQL
Preferred skills:
PEARL
Linux
The fact that they (unless they're actually using the PEARL programming language) misspelled perl and have a fairly bland portfolio aside, I can do all of this--I mean, I need to touch up on my Javascript and learn a bit more MySQL--but I can do all this, and I'm sure I can pick up perl in no time. But I was wondering--what exactly does an eCommerce developer do? Is this like, building shopping carts? User login systems? Or does it just mean doing everything except design on corporate websites?
eCommerce has one big word that goes with it: Security.
Do you feel confident writing secure code? Bear in mind that your code will be handling users' credit card information.
Now, there is a lot that goes into building an eCommerce solution from the ground up:
Product Listings
    Adding/Removing Items
    Sort by size/shape/price/color/...
    Search
    Filtering results
Shopping cart (harder than it sounds)
    Database or Session?
    Adding/Removing Items
    Checkout
    Integration with payment API
Reporting
Inventory
Security
    XSS
    SQL Injections
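To make the last two items on that list concrete, here is a small, hedged Python sketch of the usual mitigations: bound parameters against SQL injection and HTML escaping against XSS (the table and data are made up for the example, and your actual stack will differ):

```python
import sqlite3
import html

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO products (name) VALUES ('All-season tire')")

# SQL injection: never interpolate user input into the query string;
# pass it as a bound parameter instead.
user_input = "tire'; DROP TABLE products;--"
rows = conn.execute(
    "SELECT id, name FROM products WHERE name LIKE ?", (f"%{user_input}%",)
).fetchall()

# XSS: escape anything user-supplied before echoing it back into HTML.
safe = html.escape('<script>alert("xss")</script>')
print(rows, safe)
```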
I would suggest that eCommerce is about much more than a specific technology; it's more about how the database is built and the features that are required. There is a good book that I read 10 years (a long time) ago that goes into eCommerce with classic ASP, but there are many newer ones using newer technologies.
The big key is how you structure your data: products, options, orders, order details, credit card/user data, etc. Also the various ways of processing transactions, how to handle order pipelines, when to offer navigation away from the current page and when not to, how to make product recommendations, and dealing with tax APIs and shipping APIs. You might consider downloading DashCommerce (a .NET application) or something similar that fits your preferred technologies to see how they have set things up. Install something. Get it set up to feel the pains of data management, and also of navigating a shopping cart (adding products to the cart, updating the cart, checking out, setting up an account or having anonymous checkout).
Being an eCommerce developer generally means knowing how to work with Verisign (now PayPal) or similar payment processing, how to intercept fraudulent transactions and deal with them appropriately, and how to work in a high-transaction environment (caching, tiered architectures, queues, web services). Cross-linking products based on user history/profiling to maximize transactions (think candy at the checkout stand of a grocery store). Knowing how to work in a secure manner with sensitive data, which generally means encryption techniques, setting up DMZs, working with proxies, etc. Take a look at using some form of a rule engine for order pipelines so that your business rules are separate from your application logic. Understand coupon schemes, discounts, etc. Frequently ad campaigns are heavily used for generating side income.
Ecommerce can be a big topic!
It all depends on what you are working with.
I have been working as an e-commerce developer for half a year now.
I have used the Magento platform for all of my work.
Since standard Magento is already very secure, you won't have to write much security code yourself.
Mostly you change the layout and the design of the standard Magento shop and add any new features the client wants.
Most of these can be achieved by downloading custom modules built by other developers or you can build them yourself. Building a Magento module the right way is quite difficult for someone who is kind of new to programming or new to Magento.
I know this topic is rather old, but I thought someone might still benefit from this answer.

How to get the Googlebot to get the correct GeoIP'd content? [closed]

OK. This problem is doing my head in. And I don't know if there even IS a definitive answer.
We have a website, let's call it mycompany.com. It's a UK-based site, with UK-based content. Google knows about it, and we have done a load of SEO on it. All is well.
Except, we are about to relaunch my company, the GLOBAL brand, so we now need mycompany.com/uk, mycompany.com/us, and mycompany.com/au for the various countries' local content. We are using GeoIP, so if someone from the US loads mycompany.com, they get redirected to mycompany.com/us, etc.
If someone isn't in one of those three countries (US, Australia, or UK) they get the UK site.
This is all well and good, but we don't want to lose the rather large amount of Google juice we have on mycompany.com! And worse, the Googlebot appears to be 100% based in the US, so the US site (which is pretty much our LEAST important one of the three) will appear to be the main one.
We have thought about detecting the bot, and serving UK content, but it appears Google may smack us for that.
Has anyone else come across this situation, and have a solution?
As long as Google can find mycompany.com/uk and mycompany.com/au, it'll index all three versions of the site. Your domain's Google juice should apply to all three URLs just fine if they're on the same domain.
Have you thought about including links for different sites on the homepage? Google could follow those and index their content as well - in turn indexing the UK content.
If you instead use uk.mycompany.com, us.mycompany.com, etc., then you can register them with Google Webmaster Tools and specifically tell Google which country they are for.
This might still work with folders rather than subdomains, but I haven't tried it.
One way to get round that, thinking about it, would be to 301 redirect uk.mycompany.com to mycompany.com/uk, then you'd be telling Google, as well as keeping your existing structure.
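A minimal sketch of that redirect idea, written here as a hypothetical Flask handler rather than server config: answer on the country subdomains but 301 everything to the matching folder on mycompany.com, so Google sees the folder structure while the subdomains remain registered.

```python
# Hypothetical sketch: 301-redirect uk.mycompany.com/... to mycompany.com/uk/...
from flask import Flask, redirect, request

app = Flask(__name__)

COUNTRY_SUBDOMAINS = {"uk": "/uk", "us": "/us", "au": "/au"}

@app.before_request
def subdomain_to_folder():
    sub = request.host.split(".")[0]
    if sub in COUNTRY_SUBDOMAINS:
        target = f"https://mycompany.com{COUNTRY_SUBDOMAINS[sub]}{request.path}"
        return redirect(target, code=301)   # permanent redirect keeps the juice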
#ross: yes, we have links between the sites. It's just the home page, and which one comes up when someone searches for "my company" in Google.
Thanks!
Google Alerts just brought me to this thread.
The domain name that was previously used in your question is my blog and the domain name is not for sale.
Are you just using this as an example domain - for the purpose of this discussion only? The convention is to use example.com as it is reserved for this exact purpose.
Some clarification would be appreciated.