I have an extension in the Chrome Web Store and I like knowing roughly how many people are using it via the "N users" and ratings on its page.
However, I don't really like loading the whole "product" page just to see a couple of numbers and thought I'd try to make a little widget that would display it instead. However, I can't find any API documentation for the Chrome Web Store.
I would a call like /webstore/api/v1/appid.json to exist, but the closest things I've found in searching only concern the Licensing API.
Is there an official Chrome Web Store API for user metrics?
There is no such API.
You can use Google Analytics inside an extension to track users manually.
If you don't need anything fancy, just the number of installs and users, there is the My Extensions extension, which will track those numbers for you.
Copy and paste the snippet below wherever you want in the body of an HTML document saved with a ".php" extension.
<?php
// URL of your extension's Web Store page
$url = "https://chrome.google.com/webstore/detail/ddldimidiliclngjipajmjjiakhbcohn";
// Fetch the page and extract the number of users
$file_string = file_get_contents($url);
preg_match('#>([0-9,]*) users</#i', $file_string, $users);
$nbusers = str_replace(",", "", $users[1]);
echo $nbusers; // Display the number of users
?>
You can also do this client-side only (at least on your end) by using a cross-domain tool. This snippet will grab the number of users displayed on the Chrome webstore page for an extension (up-to-date as of April 28, 2018):
var chromeExtensionWebstoreURL = 'https://chrome.google.com/webstore/detail/background-image-for-goog/ehohalpjnnlcmckljdflafjjahdgjpmh';
// Requires jQuery for $.getJSON
$.getJSON('http://www.whateverorigin.org/get?url=' + encodeURIComponent(chromeExtensionWebstoreURL) + '&callback=?', function (response) {
    // Pull the user count out of the "N users" span on the Web Store page
    var numUsers = ("" + response.contents.match(/<span class="e-f-ih" title="([\d]*?) users">([\d]*?) users<\/span>/)).split(",")[2];
    console.log(numUsers);
});
In the future, Google may change the class name of the user count span, in which case you just need to update the regex appropriately.
My team kind of likes TestCafe, but there are some reservations about adopting it, the main one being support for Gherkin integration. The gherkin-testcafe npm package and the sample https://github.com/helen-dikareva/testcafe-cucumber-demo don't seem ready for prime time yet.
Is there a more reliable way of supporting BDD at the moment?
I'm from the TestCafe team. We don't plan to add this functionality in the near future. But I guess gherkin-testcafe is a nice module to start with. It is an open-source module, so there is a good chance that the community will add the required functionality. If you wish, you may go ahead and do this yourself.
After a conversation with colleagues here at the office, we concluded that the best way to
keep our BDD process,
use TestCafe and
write tests in TypeScript without adding dependencies on JavaScript packages that are not the most reliable
was to just use some conventions while writing the TestCafe tests. For example, let's say you are given the following Gherkin file:
Feature: Application
  As an administrator
  I want to be able to view and manage Applications in my account

  Scenario: Verify creating and deleting an application
    Given I am in the login page
    When I login to console with allowed user
    And I go to Applications page
    And I add an application
    Then the application is added in the application page
Then the feature.ts file would look something like the following:
import {Selector} from 'testcafe';
import {LoginPageModel} from '../pagemodels/login.pagemodel';
import {ApplicationPageModel} from '../pagemodels/application.pagemodel';
let applicationPageModel: ApplicationPageModel = new ApplicationPageModel();
let loginPageModel: LoginPageModel = new LoginPageModel();
fixture(`Feature: Application
As an administrator
I want to be able to view and manage Applications in my account
`);
let scenarioImplementation = async t => {
let stepSuccess: boolean;
stepSuccess = await loginPageModel.goToPage(t);
await t.expect(stepSuccess).eql(true, 'Given I am in the login page');
stepSuccess = await loginPageModel.login(t);
await t.expect(stepSuccess).eql(true, 'When I login to console with allowed user');
stepSuccess = await applicationPageModel.selectPage(t);
await t.expect(stepSuccess).eql(true, 'And I go to Applications page');
stepSuccess = await applicationPageModel.addApplication(t);
await t.expect(stepSuccess).eql(true, 'And I add an application');
stepSuccess = await applicationPageModel.verifyAddedApplication(t);
await t.expect(stepSuccess).eql(true, 'Then the application is added in the application page');
};
test(`Scenario: Verify creating and deleting an application
Given I am in the login page
When I login to console with allowed user
And I go to Applications page
And I add an application
Then the application is added in the application page`,
scenarioImplementation);
I have a stack system that passes page tokens in the URL. As well, my pages have dynamically created content, so I have one PHP page that serves the content based on parameters.
index.php?grade=7&page=astronomy&pageno=2&token=foo1
I understand the search indexing goal to be: have only one link per unique set of data on your website.
Bing has a way to specify which parameters to ignore.
Google, it seems, uses rel="canonical", but is it possible to use this to tell Google to ignore the token parameter (see the sketch below for what I have in mind)? My URLs (without tokens) can be anything like:
index.php?grade=5&page=astronomy&pageno=2
index.php?grade=6&page=math&pageno=1
index.php?grade=7&page=chemistry&page2=combustion&pageno=4
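For illustration, here is roughly what that canonical approach would look like in index.php. This is just a sketch: the parameter whitelist and the example.com domain are my own assumptions, not anything Google prescribes.
<?php
// Sketch: emit a canonical URL that keeps the content parameters
// but drops the session token, so every tokenized variant points to one URL.
$keep = ['grade', 'page', 'page2', 'pageno']; // parameters that identify the content
$canonicalParams = array_intersect_key($_GET, array_flip($keep));
$canonicalUrl = 'https://www.example.com/index.php' // placeholder domain
              . ($canonicalParams ? '?' . http_build_query($canonicalParams) : '');
?>
<link rel="canonical" href="<?php echo htmlspecialchars($canonicalUrl); ?>" />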
If there is no solution for Google... other possible solutions:
If I provide a sitemap for each base page, I can supply base URLs, but any crawling of that page's links will create tokens on the resulting pages. Plus I would have to constantly recreate the sitemap to cover new pages (e.g. 25 posts per page, post 26 is on page 2).
One idea I've had is to identify bots on page load (I do this already) and disable all tokens for bots. Since (I'm presuming) bots don't use session data between pages anyway, the back buttons and editing features are useless. Is it feasible (or is it crazy) to write custom code for bots?
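A rough sketch of what I mean is below; the user-agent list and the generatePageToken() helper are just illustrative names, not real code from my site.
<?php
// Sketch: skip token generation for known crawlers so indexed URLs stay token-free.
function isBot() {
    $ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
    // Very rough user-agent check; extend as needed
    return (bool) preg_match('/googlebot|bingbot|slurp|duckduckbot/i', $ua);
}

if (!isBot()) {
    // Only real visitors get a page token appended to links
    $token = generatePageToken(); // hypothetical existing helper
}
?>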
Thanks for your thoughts.
You can use the Google Webmaster Tools to tell Google to ignore certain URL parameters.
This is covered on the Google Webmaster Help page.
I have a product page on my website which I added 3 years ago.
Now production of the product has stopped and the product page was removed from the website.
What I did is start displaying a message on the product page saying that production of the product has stopped.
When someone searches Google for that product, the product page which was removed from the site shows up first in the search results.
The PageRank for the product page is also high.
I don't want the removed product page to be shown at the top of the search results.
What is the proper method to remove a page from a website so that the removal is also reflected in whatever Google has indexed?
Thanks for the reply.
Delete It
The proper way to remove a page from a site is to delete the actual file that is being returned to the user/bot when the page is requested. If the file is not on the webserver, any well-configured webserver will return a 404, and the bot/spider will remove the page from the index on its next refresh.
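If the page is produced by a script rather than a static file, a minimal sketch of the same idea is to answer with a 404 yourself. The $productDiscontinued flag here is an assumed stand-in for however you track discontinued products.
<?php
// Sketch: make the discontinued product's dynamic page answer with 404
// so crawlers drop it from the index, just as a deleted file would.
$productDiscontinued = true; // assumed flag from your own product data
if ($productDiscontinued) {
    http_response_code(404);
    echo 'This product is no longer available.';
    exit;
}
?>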
Redirect It
If you want to keep the good "google juice" or SERP ranking the page has, probably due to inbound links from external sites, you'd be best to set your webserver to do a 302 redirect to a similar (updated) product.
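For example, a minimal PHP sketch of that redirect, assuming the old product page is a PHP script and /new-product is a placeholder for the replacement URL:
<?php
// Sketch: send visitors and crawlers of the discontinued product page
// to a similar, updated product with a 302 redirect, as suggested above.
header('Location: https://www.example.com/new-product', true, 302);
exit;
?>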
Keep and convert
However, if the page is doing so well that it ranks #1 for searches to the entire site, you need to use this to your advantage. Leave the bulk of the copy on the page the same, but highlight to the viewer that the product no longer exists and provide some helpful options to the user instead: tell them about a newer, better product, tell them why it's no longer available, tell them where they can go to get support if they already have the discontinued product.
I completely agree with the above suggestion and want to add just one point.
If you want to remove that page from Google search results, just log in to Google Webmaster Tools (you must have verified that website in Google Webmaster Tools) and submit an index removal request for that particular page.
Google will de-index that page and it will be removed from Google search rankings.
I set up a site with a template, and the title was something they supplied as a default. When I searched for my site's title, it showed up in results, but with their default title. After changing it a couple of days ago, my site still shows up with the default title instead of what I changed it to.
Is there any way I can force Google to update their information so the title I have now shows up in results instead of the default title?
This will refresh your website immediately:
From the Webmaster Tools menu -> Crawl -> Fetch as Google
Leave the URL blank to fetch the homepage, then click Fetch
A Submit to Index button will appear beside the fetched result; click it, then choose URL and all linked pages > OK
Just wait; Google should normally revisit your site and update its information. But if you are in a hurry, you can try the following steps:
Increase the crawl speed of your site in Google Webmaster Tools: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=48620
Ping your website on a service like http://pingomatic.com/
Submit (if you have not yet) or resubmit an updated sitemap of your website.
Fetch as Google works, as already suggested. However, stage 2 should be to submit your site to several large social bookmarking sites like Digg, Reddit, StumbleUpon, etc. There are huge lists of these sites out there.
Google notices everything on these sites, and it will speed up the re-crawling process. You can keep track of when Google last cached your site (there is a big difference between crawling and caching) by searching for cache:sitename.com.
I'd like to gather user data in a web-based intranet application similar to rapportive, gist and xobni. These services gather and display facebook profiles, twitter streams, linkedin profiles, etc. based on the user's email address in your inbox.
Is there a 3rd party library or API (free or paid) that would provide this kind of data from social networks based on a person's email address? I'd rather not have to go through and create calls for all these different services, not to mention maintaining all these APIs. If there is already a service out there that does the work of search and aggregation that would be so useful.
The application is in C#, so a C# library or wrapper for the API would be nice but definitely not a requirement.
Rapleaf... that is the magical service that you can use to find social media data. They have both batch and REST API services.
Yes, Rapleaf appears to be the way to go. But applying for the API key appears to take some time; they may or may not grant an API key. Any experience with this?
If you're a somewhat industrious coder and you only have a select few social networks to search, it isn't so difficult to write simple library functions to access their APIs yourself. They all have different limits on how much searching you are allowed to do, so keep the idea of throttling in mind.
Here's an example of a PHP function I use to search for people in Batchbook by email address; you should be able to modify this to get data from most typical REST APIs:
// Uses the HttpRequest class from the pecl_http (v1) extension
function getBatchbookContacts($orgname, $apikey, $emailSearch, $page) {
    // Build the Batchbook people-search URL for the given email address
    $url = "https://{$orgname}.batchbook.com/api/v1/people.json?auth_token={$apikey}&email={$emailSearch}&page={$page}";
    $request = new HttpRequest($url, HttpRequest::METH_GET);
    $request->setContentType('application/json');
    try {
        $response = $request->send();
        if ($response->getResponseCode() == '201' || $response->getResponseCode() == '200') {
            // Return the raw JSON body on success
            $result_json = $response->getBody();
            return $result_json;
        } else {
            throw new Exception($response->getBody(), $response->getResponseCode());
        }
    } catch (HttpException $ex) {
        throw new Exception('Internal Server Error: ' . $ex->getMessage(), 500);
    }
}
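The throttling mentioned above can be as simple as spacing out calls. Here is a minimal sketch; the one-second interval is just an assumption (check each network's documented rate limits), and the credentials and email addresses are placeholders.
<?php
// Minimal throttling sketch: wait at least $minInterval seconds between API calls.
function throttledCall(callable $apiCall, &$lastCallTime, $minInterval = 1.0) {
    $elapsed = microtime(true) - $lastCallTime;
    if ($elapsed < $minInterval) {
        // Sleep off the remaining time (usleep takes microseconds)
        usleep((int)(($minInterval - $elapsed) * 1000000));
    }
    $lastCallTime = microtime(true);
    return $apiCall();
}

// Example usage with the Batchbook function above (credentials are placeholders):
$lastCallTime = 0.0;
foreach (array("alice@example.com", "bob@example.com") as $email) {
    $json = throttledCall(function () use ($email) {
        return getBatchbookContacts('myorg', 'MY_API_KEY', $email, 1);
    }, $lastCallTime);
    // ... process $json ...
}
?>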