What are the Alternatives to Google Analytics [closed] - seo

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 2 years ago.
I need to track unique visitor counts in my web application. I would really like to use Google Analytics, but due to the load limitations Google imposes I will not be able to use it: I am expecting WAY over 10,000 requests a day, which is the limit the Google Analytics API imposes. Is there another company, paid or free, with the same features as Google Analytics?

There definitely are.
Here are two open source and free solutions that are very polished:
Piwik - Designed as a direct competitor to Google Analytics (it looks just as nice) that you host on your own servers
Open Web Analytics

The 10,000-request limit applies to the Data API, not to the actual data collection.
In other words, an unlimited number of visitors can view your website; it's only if you use the API to extract data from Google's database that you're limited to 10k requests a day.
Check this link for more details.

The biggest, most obvious, most usual alternative is to simply do it yourself. Your web server needs to log requests for security etc. anyway, so it's not a big deal to run something like Webalizer on those logs. You won't get the quick, easy access to advanced information like the paths users take through the site, but that can be determined if you care enough. You do gain one huge benefit, though: privacy of your own data.
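To illustrate the do-it-yourself route, here is a minimal sketch that counts unique visitors per day from an access log. It assumes the Common/Combined Log Format; the sample lines are made up:

```python
import re

# Matches the start of a Common/Combined Log Format line: the client IP,
# then the day portion of the bracketed timestamp, e.g.
# 203.0.113.7 - - [10/Oct/2023:13:55:36 +0000] "GET / HTTP/1.1" 200 512
LINE_RE = re.compile(r'^(\S+) \S+ \S+ \[([^\]:]+):')

def unique_visitors_per_day(log_lines):
    """Count distinct client IPs per day from access-log lines."""
    days = {}
    for line in log_lines:
        m = LINE_RE.match(line)
        if not m:
            continue  # skip malformed lines
        ip, day = m.group(1), m.group(2)
        days.setdefault(day, set()).add(ip)
    return {day: len(ips) for day, ips in days.items()}

sample = [
    '203.0.113.7 - - [10/Oct/2023:13:55:36 +0000] "GET / HTTP/1.1" 200 512',
    '203.0.113.7 - - [10/Oct/2023:14:01:02 +0000] "GET /a HTTP/1.1" 200 128',
    '198.51.100.4 - - [10/Oct/2023:15:12:00 +0000] "GET / HTTP/1.1" 200 512',
]
print(unique_visitors_per_day(sample))  # {'10/Oct/2023': 2}
```

Counting by IP undercounts users behind NAT and overcounts users with dynamic addresses, but it is a reasonable first approximation with zero extra infrastructure.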

We use Omniture here but it'll cost you.

There is SpeedTrap, a Java-based analytics package. Our company used it for years before they turned into cheap **ards and decided Google Analytics was more cost-effective (because it was free). But that's a story for another night.

Related

centralized API documentation for microservices [closed]

Closed 5 years ago.
My team and I are currently building multiple services in parallel. We have the benefit of building all the services from scratch. I would like the ability to automatically display all API endpoints, from all services, in one page/site. This would be helpful because (among other things):
I don't have to go to multiple documentation sites to see the available endpoints in my entire "system".
It'll be a good first step in determining whether any of the services should be split, combined, or simply refactored.
Some of our services are in Django and the rest-swagger module is a great help. But I don't see how I can combine rest-swagger documentation from multiple services into a single documentation page/site.
I'm currently looking through this site and anything related to the Netflix experience but could not find a solution to my problem. Maybe centralized documentation isn't a big deal with 600+ services at Netflix, but that's hard to believe.
Can anyone suggest a tool or method to have a combined API documentation for all services in a microservice architecture?
My ideal scenario of what happens when a service is changed:
I click on the link to see the list of endpoints in my system.
A teammate updates a service and also its documentation.
I refresh the page I am currently on and see the change made in step 2.
In my experience, you have a few options:
http://readme.io/
Make a wiki with JIRA, Redmine.
In Github create a repo for exclusive docs.
Google Docs.
I don't know of an existing tool for this; I'm just offering thoughts on where to do it.
From what the OP describes, they are already building a microservices architecture using the Netflix stack. There should be a repository configuring the name (or URL) of each service, which the config server or service registry reads from. To me, that's the perfect place to put a reference to each microservice's documentation under its own entry. This way you get the benefit of maintaining documentation and code in the same place, and you could potentially also collect runtime information, such as instance/connection counts, if you hook into the config/registry server.
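As a sketch of the aggregation idea (the service names and spec contents below are made up; in practice each spec would be fetched from the service, e.g. rest-swagger's generated schema at a known URL), a tiny aggregator can merge the per-service OpenAPI `paths` into one endpoint index:

```python
# Minimal aggregator: merge per-service OpenAPI/Swagger specs into a
# single flat endpoint index. In a real deployment the specs would be
# fetched over HTTP from each service; here they are inline dicts.
def build_endpoint_index(specs_by_service):
    """Return a sorted list of (service, method, path) across all services."""
    index = []
    for service, spec in sorted(specs_by_service.items()):
        for path, ops in sorted(spec.get("paths", {}).items()):
            for method in sorted(ops):
                index.append((service, method.upper(), path))
    return index

specs = {
    "users":  {"paths": {"/users": {"get": {}, "post": {}}}},
    "orders": {"paths": {"/orders": {"get": {}}}},
}
for service, method, path in build_endpoint_index(specs):
    print(f"{service:7} {method:5} {path}")
```

Rendering that index as a single HTML page, regenerated whenever a service's spec changes, gives exactly the "refresh and see the update" workflow described above.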
Being in a similar situation, I am looking to adopt https://readthedocs.org/ backed by Git.

Google+ API for reading "plus one" count [closed]

Closed 4 years ago.
Is there any official Google API for getting the number of "plus one" actions for a given URL?
There is this common method which doesn't seem to be using an official API.
Are we allowed to use this API at all? Even if we are, it could stop working anytime, right?
The API endpoints in the official Google+ API do all require some activity ID and thus can't give you the "plus one" count for any given URL, can they?
This is not currently a feature of our API. You can request the feature by filing it in our Issue Tracker (https://code.google.com/p/google-plus-platform/issues/entry?template=Feature%20Request%20-%20REST%20API). We use these reports and the number of Stars they receive to track developer feedback.
Also, you are correct that any method call not officially supported and documented (https://developers.google.com/+/api/latest/) may change in the future as we continue to build out our APIs. Sometimes clever developers find cool ways of doing things, but we do not guarantee that unsupported features will not change.
For analytics on your own sites, you get +1 and Share activity through Google Analytics. You can even set up custom analytics to monitor this.
For the +1 widget, there is not an official API to access the counts and the APIs referenced should probably be avoided for the reasons you mention (it could stop working, may not be working correctly to begin with). I can think of a few reasons people shouldn't be able to programmatically pull analytics from any arbitrary site on the web - probably a part of the reason that this does not exist first party from Google. If you feel it's an important feature, please add or star a feature request in the issue tracker - you should add it to the widgets section.
For in-network activity on Google+, the Activities resource documents everything that is available to you when looking at content on Google+. You can, for example, +1 a post, then share it and track the analytics on Google+ by watching the activity. If you're doing this, you should be able to use the list of "plusoners" to determine the count of people who have +1'd a post. See it in action using the API Explorer here:
Activities for +GusClass
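For illustration, counting the +1s on an activity comes down to reading `object.plusoners.totalItems` from the activity resource. The shape below follows the Activities resource documentation of the time; the sample data is invented, and the Google+ API itself has since been retired:

```python
# Sketch: read the +1 count from a Google+ activity resource, as
# returned by the (now-retired) activities API. The dict shape follows
# the old Activities documentation; the sample values are made up.
def plus_one_count(activity):
    """Return the number of +1s on an activity, defaulting to 0."""
    return activity.get("object", {}).get("plusoners", {}).get("totalItems", 0)

activity = {
    "id": "z12abc",
    "object": {
        "plusoners": {"totalItems": 3},
    },
}
print(plus_one_count(activity))  # 3
```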

How does Google store the index? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
Lately I have been reading about web crawling, indexing, and serving. I found some information in Google Webmaster Tools - Google Basics about the process Google uses to crawl the web and serve searches.
What I am wondering is how they store all those indexes? I mean, that's a lot to store, right? How do they do it?
Thanks
I'm answering myself because I found some interesting stuff that talks about Google index:
In the Google Webmasters YouTube channel, Matt Cutts gives us some references about the architecture behind the Google index: Google Webmaster YouTube Channel
One of those references, and in my view well worth reading, is this one: The Anatomy of a Large-Scale Hypertextual Web Search Engine
This helped me understand it better, and I hope it helps you too!
They use a variety of different types of data stores depending on the type of information. Generally, they don't use SQL because it has too much overhead and isn't very compatible with large-scale distribution of information.
Google actually developed their own data store (Bigtable, built on the Google File System) that they use for large, read-mostly applications such as Google Earth and the search engine's cache. It supports distributing information over a very large number of computers, with each piece of information stored on three or four different machines. This allows them to use cheap hardware: if one computer fails, the others immediately begin restoring all the data it held to the appropriate number of copies.
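At its core, the search index itself is an inverted index: a mapping from each term to the documents that contain it. A toy sketch of the idea (real engines add term positions, compression, and sharding across many machines):

```python
from collections import defaultdict

# Toy inverted index: term -> sorted list of doc IDs containing it.
def build_index(docs):
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return {term: sorted(ids) for term, ids in index.items()}

def search(index, *terms):
    """Docs containing all the given terms (an AND query)."""
    sets = [set(index.get(t, ())) for t in terms]
    return sorted(set.intersection(*sets)) if sets else []

docs = {1: "web search engine", 2: "web crawler", 3: "search quality"}
idx = build_index(docs)
print(search(idx, "web", "search"))  # [1]
```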

Is there a way to programmatically access Google's search engine results? [closed]

Closed 2 months ago.
The community reviewed whether to reopen this question 2 months ago and left it closed: the original close reason(s) were not resolved.
Does Google offer a way to programmatically see its search engine results for a certain query?
I want to build a tracking application so that a user can see what rank on the google results their website is for certain keywords.
EDIT: The behavior of the program would be: every day the program queries Google for the desired phrases, sees what position the user's websites are, and emails the users an update of their positions for their phrases.
I want to be sure to comply with Google's terms of service too.
After finding this question I did some research, as the other answers seemed out of date.
The Google Search API would be the obvious choice, as quoted by other users; however, it has now been deprecated in favour of the Custom Search API.
Although not obvious at first, the Custom Search API does allow you to search the entire web. The bad news, however, is that the order of the results is not the same as in a regular web search.
In conclusion, this used to be possible, but it no longer is. The new API (at a cost) will let you search the web, but you will not be able to get back the ranking as required.
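For completeness, here is a hedged sketch of how a rank tracker might use the Custom Search JSON API. `API_KEY` and `CX` are placeholders you would obtain from the Google Cloud console and the Programmable Search Engine setup, and, as noted above, the result ordering is not guaranteed to match a regular web search:

```python
from urllib.parse import urlencode

# Sketch: build a Custom Search JSON API request URL and find the rank
# of a site within a page of results. The response dict below is a
# trimmed, made-up example of the shape the API returns.
SEARCH_URL = "https://www.googleapis.com/customsearch/v1"

def build_query_url(api_key, cx, query, start=1):
    return SEARCH_URL + "?" + urlencode(
        {"key": api_key, "cx": cx, "q": query, "start": start})

def rank_of_site(response, site):
    """1-based position of the first result whose link contains `site`,
    or None if it is not on this page of results."""
    for pos, item in enumerate(response.get("items", []), start=1):
        if site in item.get("link", ""):
            return pos
    return None

response = {"items": [
    {"link": "https://example.org/page"},
    {"link": "https://mysite.example.com/"},
]}
print(rank_of_site(response, "mysite.example.com"))  # 2
```

Fetching `build_query_url(...)` daily and emailing the resulting positions would implement the workflow described in the question, within whatever quota and terms the API imposes.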
I know the question is Google specific, but it doesn't hurt to try out other search engines which might be more open to API integration.
Check out DuckDuckGo's API.
Try the Google Custom Search API. Get a developer API key from Google and a cx code for your search engine. The procedure is given on my blog: http://preciselyconcise.com/apis_and_installations/search_google_programmatically.php
Yes, Google provides a search API that you can use:
The Google AJAX Search API lets you put Google Search in your web pages with JavaScript. You can embed a simple, dynamic search box and display search results in your own web pages or use the results in innovative, programmatic ways. See the examples below for inspiration.
Don't let the name fool you, this API can be used for more than just JavaScript on a webpage.

Web-based document sharing for small organizations [closed]

Closed 5 years ago.
In the past I have used nexo.com to share documents with sales, marketing, PR, and technical people for a small startup. But I wonder if there is a better solution to allow different types of geographically dispersed workers to access a variety of uploaded documents. I don't want to have to build or host this myself, and free or cheap is always nice.
I read about Confluence, but it seems to be way more than what I need. I simply want access-controlled folders in the cloud.
I haven't used it myself just yet, but I've heard great things about Google Docs.
We use s3fm for that. It's a free solution, but it requires an Amazon S3 account. Since we have one for our hosting needs, it was an obvious choice. But given Amazon S3's bottom pricing, I think it might make sense to consider opening one just for this.
Love Dropbox!!! I haven't used it for setting up a lot of group access, though.
Sounds like Google Sites would help you a lot. You can set up a network of distinct Web sites -- one for sales, another for marketing, another for PR -- and upload your files to them. You can determine who has access to each site as well as each page of content.
In case anyone else checks this Q:
Wound up using filesanywhere.com - has the exact features I was looking for.
We use a combination of:
Backpack
SVN
JungleDisk
Take a look at Dropbox.
Access control is somewhat limited, but it's been working out very well for me.
Unfortunately I'm in the middle of writing such an application for a client. The best thing I can recommend is taking an existing web based file manager and adding in the permission feature.
With a big freaking huge disclaimer that I work on this as my day job:
If you're looking for feedback on those documents Backboard gives you web-based viewing and collaboration with no software required.
There is a product called docpro. It allows you to set up various security levels, routing methods, etc. It's web-based, so you can use it for geographically dispersed team members across the globe. It's not free, but I think it's cheap.
Check this link:
http://www.omnexsystems.com/Faq/documentpro.html