For our web app, which will use Amazon's API as a basis for some of the site's main interactions, we required the ability to do a generalized search of Amazon's products and return results based on relevancy. The expectation was that their API would work exactly like their actual site's search.
Unfortunately, it does not. For instance, querying "joy of cooking" does not return a link to the famous cookbook, but to some food processor. By contrast, on the actual site, the book isn't just the first result; it and its derivations occupy roughly the top five results.
Is there a way to get this level of search relevance from Amazon's API without specifying a browse node? We need to be able to search everything at once, and the API seems very limited in its parameter set.
The answer is that if you use "All" as your search index rather than "Blended", you will get results that are in line with Amazon's own product search. Older docs don't seem to account for this discrepancy, but testing both methods has shown "All" to be the preferred choice.
See the 2010-11-01 developer guide at http://docs.amazonwebservices.com/AWSECommerceService/2010-11-01/DG/ and search the page for "SearchIndex: All".
You don't get any item sorting options with this index, but if all you want is "most relevant" results, this is the way to go.
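For illustration, here is a minimal sketch of what such a request looks like, assuming the 2010-11-01 REST endpoint. The access key is a placeholder, and the request signature the API requires is omitted for brevity:

```python
# Minimal sketch: an ItemSearch request with SearchIndex=All.
# The signature step required by the API is omitted here; sign the
# URL per the developer guide before sending it.
import urllib.parse

params = {
    "Service": "AWSECommerceService",
    "Operation": "ItemSearch",
    "SearchIndex": "All",                 # "All" rather than "Blended"
    "Keywords": "joy of cooking",
    "Version": "2010-11-01",
    "AWSAccessKeyId": "YOUR-ACCESS-KEY",  # placeholder
}

url = "http://ecs.amazonaws.com/onca/xml?" + urllib.parse.urlencode(params)
print(url)  # sign this, then fetch it to get relevance-ordered results
```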
Is there any easier way to filter data in Wikidata and download a portion of claims?
For example, let us say that I want a list of all humans who are currently alive and have an active Twitter profile.
I would like to download a file containing their Q-ids, names and Twitter usernames (https://www.wikidata.org/wiki/Property:P2002).
I expect there to be hundreds of thousands of results, if not millions.
What is the best way to obtain this information?
I am not sure whether one can collect the results of a SPARQL query in a file.
I also looked at the MediaWiki API, but I'm not sure it allows accessing multiple entities in one go.
Thanks!
Wikidata currently has around 190,000 Twitter IDs linked to people. You can easily get them all using the SPARQL Query Interface at https://query.wikidata.org/ (with a LIMIT you can remove or increase). In the dropdown on the right, choose "SPARQL endpoint" for a direct link (no limit, ~35 MB .csv).
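If you'd rather script the download, here is a minimal sketch against the public SPARQL endpoint. The "no date of death" filter is one way to approximate "currently alive", and the label service slows large queries down, so drop ?itemLabel if you hit timeouts:

```python
# Download Q-ids, names and Twitter usernames as CSV from the public
# Wikidata SPARQL endpoint.
import requests

QUERY = """
SELECT ?item ?itemLabel ?twitter WHERE {
  ?item wdt:P31 wd:Q5 ;        # instance of: human
        wdt:P2002 ?twitter .   # Twitter username
  FILTER NOT EXISTS { ?item wdt:P570 [] }  # no date of death recorded
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
"""

resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY},
    headers={"Accept": "text/csv",  # ask for CSV instead of XML/JSON
             "User-Agent": "example-bot/0.1 (contact@example.org)"},
)
resp.raise_for_status()
with open("twitter_ids.csv", "wb") as f:
    f.write(resp.content)
```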
But, in case you run into timeouts with more complicated queries, you can first try LIMIT and OFFSET, or one of:
Wikibase Dump Filter is a CLI tool that downloads the full Wikidata dump but filters the stream as it comes in according to your needs. You can put together much the same thing yourself with some creative piping, and it tends to work better than one would expect.
https://wdumps.toolforge.org does more or less the same thing as a hosted service, then lets you download the filtered data.
The linked data interface also works rather well for "simple query, high volume" access needs. Example here gives all Twitter IDs (326,000+), and you can read it in pages as fast as you can issue GET requests (set an appropriate Accept header to get JSON); a rough sketch follows.
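As an illustration of that paged access pattern, the sketch below fetches a few pages of P2002 triples. The endpoint path and the page parameter are assumptions on my part (the actual link from the example above may differ), so treat this as a template rather than a working recipe:

```python
# Rough sketch of paging through the linked data interface.
# Endpoint path and the "page" parameter are ASSUMED; adjust them to
# whatever the interface actually exposes.
import requests

LDF = "https://query.wikidata.org/bigdata/ldf"  # assumed endpoint
for page in range(1, 6):                        # first five pages as a demo
    resp = requests.get(
        LDF,
        params={"predicate": "http://www.wikidata.org/prop/direct/P2002",
                "page": page},
        headers={"Accept": "application/ld+json"},  # JSON, per the answer
    )
    resp.raise_for_status()
    print(len(resp.text), "bytes on page", page)
    # ... parse the JSON-LD payload for item / Twitter-ID pairs ...
```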
I have a search functionality that gets data from the HERE API's Search endpoint. I maintain records of each search's results so I can add metadata that I need for my own purposes, and also so I can provide results without always going back to the HERE API.

The problem I have is with pagination, specifically with providing a starting index when fetching results from HERE. Similar to how Algolia does it, I want to be able to search for a term and begin with the results at a certain index, the offset. The HERE API apparently doesn't allow this at all. The closest it comes to such a feature is that it provides the URL for the next search, as described here. This is limited because it doesn't allow me to start the search results at a particular index that I specify.

So essentially I want to know if there's a "standard" way of getting such functionality even when it's not provided by the API.
My own solution
The HERE API provides a size parameter for specifying the total number of results I want, so I can request a larger size than I need and use code to start the results from my desired index. But this feels a bit hacky, and I wonder if there's a better or more established way of doing this.
Happy to listen to any ideas! Thanks. :)
This kind of 'offset' for starting the paging after a specific number of results is indeed not supported by the Places API itself.
You have to set up a workaround within your application.
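For illustration, here is a minimal sketch of the over-fetch-and-slice workaround described in the question. Only the size parameter comes from the discussion above; the endpoint URL, the at context, and the apiKey parameter are assumptions to make the example self-contained:

```python
# Sketch: emulate an offset by over-fetching with "size" and slicing.
# Endpoint URL and auth parameter are ASSUMED; check the HERE docs.
import requests

def search_with_offset(query, offset, limit, api_key):
    resp = requests.get(
        "https://places.ls.hereapi.com/places/v1/discover/search",  # assumed
        params={
            "q": query,
            "at": "52.5,13.4",        # example search location
            "size": offset + limit,   # fetch enough to cover the offset
            "apiKey": api_key,        # assumed auth parameter
        },
    )
    resp.raise_for_status()
    items = resp.json()["results"]["items"]
    return items[offset:offset + limit]  # slice to emulate the offset
```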
I have seen various APIs and tools that show the most visited pages of Wikimedia projects such as Wikipedia, but all of these services have a limit: they do not show more than 1,000 pages, while I would like the list of the 5,000-10,000 (or more) most visited pages in order of traffic.
These are all the services I checked and with which I found this limit:
https://en.wikipedia.org/w/api.php?action=help&modules=query%2Bmostviewed
https://stats.wikimedia.org/#/en.wikipedia.org/reading/top-viewed-articles/normal|table|last-month|~total|monthly
https://pageviews.toolforge.org/topviews/?project=en.wikipedia.org&platform=all-access&date=last-month&excludes=
https://wikimedia.org/api/rest_v1/#/Pageviews%20data
I have also found services like https://quarry.wmflabs.org/ and https://query.wikidata.org/ where you can run a query. Technically it might be possible through these, but I don't know what query would show the pages with the most visits.
I also found an interesting article here: https://www.reddit.com/r/bigquery/comments/3dg9le/analyzing_50_billion_wikipedia_pageviews_in_5/ which explains that it is possible to use Google's BigQuery, but that is an external service, and before using it I wanted to know whether a simpler method exists.
If the REST API doesn't suit your purpose, you'd need to parse the raw data yourself. That's because all the tools you've linked just consume the REST API.
The raw data are available at https://dumps.wikimedia.org/other/pageviews/. There are two groups of files there: filenames starting with pageviews- list the number of views of individual pages, while those starting with projectviews- list the number of views of whole projects.
For your target, you need the pageviews ones. Download the files for your timespan, and then analyze them using a script.
The files are space-separated. Each row represents one page that was visited in that hour: the first column is the project (en is English Wikipedia, for instance), the second is the page title (with spaces represented by underscores), and the third is the total pageviews.
The technical documentation is available at https://wikitech.wikimedia.org/wiki/Analytics/Data_Lake/Traffic/Pageviews.
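As an illustration, here is a short sketch that aggregates a batch of downloaded (and gunzipped) pageviews-* files into a top-N list for English Wikipedia, following the column layout described above:

```python
# Aggregate hourly pageviews dumps into a top-N list for enwiki.
# Assumes the pageviews-* files were downloaded and gunzipped locally.
from collections import Counter
import glob

counts = Counter()
for path in glob.glob("pageviews-*"):
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            parts = line.split(" ")
            if len(parts) >= 3 and parts[0] == "en":  # English Wikipedia only
                counts[parts[1]] += int(parts[2])     # title -> summed views

for title, views in counts.most_common(10000):        # well past the 1000 cap
    print(views, title.replace("_", " "))
```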
I'm setting up a Solr search engine. I've already set up the misspelling system and the suggestions.
However, I can't seem to find out how to retrieve the top 10 most searched words/terms/keywords in Solr/Lucene. How can I get this? I want to display those on my homepage.
Solr does not provide this kind of feature out of the box. There is the StatsComponent, which provides you with all kinds of statistics, but all of those are numeric only.
Depending on how you access Solr (directly or via your own app), you could intercept all calls and log the query string. I did this in a recent project where I logged all queries to a database. If you submit all keywords to another core on your Solr server, you can run facet queries on your search terms, as described by Hyque.
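For illustration, here is a rough sketch of that approach with a separate logging core. The core name (querylog) and field name (term) are hypothetical; you'd need to create the core with a suitable string field first:

```python
# Sketch: log every search term to a separate "querylog" core, then
# facet on the logged terms to get the most searched ones.
# Core and field names are HYPOTHETICAL.
import requests

SOLR = "http://localhost:8983/solr"

def log_query(term):
    # Index the raw search term as a document in the logging core.
    requests.post(f"{SOLR}/querylog/update?commit=true", json=[{"term": term}])

def top_searched(n=10):
    # Facet on the logged terms to get the n most frequent ones.
    resp = requests.get(
        f"{SOLR}/querylog/select",
        params={"q": "*:*", "rows": 0, "wt": "json", "facet": "true",
                "facet.field": "term", "facet.limit": n},
    )
    return resp.json()["facet_counts"]["facet_fields"]["term"]
```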
You could use a facet for retrieving the Top X words like this:
http://yourservergoeshere/solr/select?q=*&wt=xml&indent=true&facet=true&facet.query=*&facet.field=message&facet.limit=10&facet.minCount=1
The value of facet.field depends on the field you'd like to search in. With facet.limit you'll (obviously) limit the number of results to 10. You'll find the facet results at the end of the response, starting with "facet_counts".
Edit: I really should go to bed earlier. I didn't see the "most searched" in your question. Sorry for that.
Apache Solr does not provide any such capability as of today. There is a desire for this and a JIRA ticket corresponding to it. You can vote for it if you'd like to see it in Solr some day: https://issues.apache.org/jira/browse/SOLR-10359.
The stats component provides information around statistics, but it's mostly numeric in nature. You could parse server logs and come up with a way to build a "Frequently Searched Terms" report (e.g. pump those logs into SiLK or Kibana for visualization).
If you have the ability to change the front end and add some JavaScript to the UI, or can intercept the search request and make async or batch calls to tracking APIs, you can use SearchStax Analytics, which provides search analytics tracking searches, clicks, cart actions, revenue, etc.
I'm trying to find out if there is a programmatic way to determine how far down in a search engine's results my site shows up for given keywords. For example, my query would provide my domain name and keywords, and the result would return, say, 94, indicating that my site was the 94th result. I'm specifically interested in how to do this with Google, but I'm also interested in Bing and Yahoo.
No.
There is no programmatic access to such data. People generally roll their own version of such trackers: get the Google search page and use regexes to find your position. But nowadays different results are shown in different geographies, and results are personalized.
The gl=us parameter will help you get results from the US; you can change it to match whatever geography you need.
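If you do roll your own, a fragile sketch of the scrape-and-regex approach might look like the following. Google's markup changes often, scraping may breach its terms, and ads or widgets can make the computed position approximate:

```python
# Fragile sketch: fetch a Google results page and regex out the
# position of a domain. Treat the result as approximate.
import re
import urllib.parse
import urllib.request

def find_rank(domain, keywords, num=100):
    url = ("https://www.google.com/search?" +
           urllib.parse.urlencode({"q": keywords, "num": num, "gl": "us"}))
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    html = urllib.request.urlopen(req).read().decode("utf-8", "ignore")
    position = 0
    for link in re.findall(r'href="(https?://[^"]+)"', html):
        if "google." in link:
            continue              # skip Google's own navigation links
        position += 1
        if domain in link:
            return position
    return None                   # not found in the first `num` results

print(find_rank("example.com", "example domain"))
```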
Before creating this from scratch, you may want to save yourself some time (and money) by using a service that does exactly that [and more]: Ginzametrics.
They have a free plan (so you can test if it fits your requirements and check if it's really worth creating your own tool), an API and can even import data from Google Analytics.