SharePoint 2010 - Web Analytics - Traffic works but Search does not

I am having a problem with Web Analytics in SharePoint 2010.
When I go to the Web Analytics Reports - Summary page from Central Admin, the Traffic category shows results for its metrics. However, the Search category displays 0 for the Total Number of Search Queries metric.
My search is up and running and returns results accurately, so I'm having a hard time figuring out why the traffic reports work correctly but the search reports do not. Any help is greatly appreciated. Thanks.

I believe that the search information comes from the Usage and Health Data Collection service application. The database behind that service is WSS_Logging. I would investigate in that area and check that the Web Analytics service application has the right association and permissions.
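If you want to verify the proxy associations from code rather than clicking through Central Admin, one quick check is to list the farm's service application proxies and their status. A minimal sketch, assuming a console application run on a farm server under an account with farm rights:

using System;
using Microsoft.SharePoint.Administration;

class ProxyAudit
{
    static void Main()
    {
        // Walk every service proxy in the local farm and list its
        // application proxies - the Web Analytics and Usage proxies
        // should both show up here with Status == Online.
        foreach (SPServiceProxy serviceProxy in SPFarm.Local.ServiceProxies)
        {
            foreach (SPServiceApplicationProxy appProxy in serviceProxy.ApplicationProxies)
            {
                Console.WriteLine("{0} ({1}): {2}",
                    appProxy.DisplayName, appProxy.TypeName, appProxy.Status);
            }
        }
    }
}

If the Web Analytics proxy is missing or not Online, re-create the proxy or fix the web application's proxy-group association before chasing permissions.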

Related

Automating web page population

I have data in a CSV file and want to do the following with it:
Log into a web site
Populate the fields of the page with the CSV data
Navigate to the next page
Input the rest of the data
Click submit
Repeat for the next line
I can do this using UiPath but it's an expensive option for a relatively simple use case.
Does anyone have any suggestions on how to do this using a different method?
Thanks,
EddieT
If you're looking for alternatives, you'd probably want to investigate APIs or webhooks, but that all depends on the access rights you have for that particular website.
Try messaging the developers of the website you need, as they might already have this service available.
UiPath may appear expensive, but if you calculate the amount of time saved on this one process, you will see the money savings too.
If you can find a couple of other processes you want to automate, I'd highly recommend it.
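If the site has no API, a scripted browser is the usual free alternative to UiPath. Below is a minimal sketch using Selenium WebDriver in C#; the URL, credentials, and element IDs are hypothetical placeholders you would swap for the site's real locators:

using System.IO;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

class CsvFormFiller
{
    static void Main()
    {
        using (IWebDriver driver = new ChromeDriver())
        {
            // Log in once (placeholder URL and element IDs).
            driver.Navigate().GoToUrl("https://example.com/login");
            driver.FindElement(By.Id("username")).SendKeys("user");
            driver.FindElement(By.Id("password")).SendKeys("pass");
            driver.FindElement(By.Id("loginButton")).Click();

            foreach (string line in File.ReadLines("data.csv"))
            {
                // Naive split; use a CSV library if fields contain commas.
                string[] fields = line.Split(',');

                // First page: populate and move on.
                driver.Navigate().GoToUrl("https://example.com/form");
                driver.FindElement(By.Id("field1")).SendKeys(fields[0]);
                driver.FindElement(By.Id("nextButton")).Click();

                // Second page: enter the rest of the data and submit.
                driver.FindElement(By.Id("field2")).SendKeys(fields[1]);
                driver.FindElement(By.Id("submitButton")).Click();
            }
        }
    }
}

The same pattern works from PowerShell or Python if C# isn't convenient; the Selenium API is nearly identical across bindings.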

Getting Search Server to ignore SharePoint document data, and speed up crawl times

Background:
I have a SharePoint Foundation 2010 installation that is being used to store scanned images of paper documents, making an electronic version of the paper file folders we keep for each of our company's clients. All of the documents are stored as PDF files.
The configuration includes a web server housing SharePoint and the Search Server 2010 Express service, as well as a separate database server housing both the content data and the search crawl store. Both the SharePoint/Search box and the SQL box are VMware VMs running on shared hosts (including a shared SAN) alongside our other production servers.
Each file added to SharePoint must be added through a custom interface, which attaches metadata tags for client information (a site content type with a set of site columns defines this extra metadata). We then expose this client-identifying data to the search server by setting managed properties, so we can run queries against the search web service specifying WHERE CustomClientID = X.
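For illustration, that kind of query looks roughly like this when issued through the server object model (the web service accepts the same SQL syntax); the site URL and client ID are placeholders:

using System;
using System.Data;
using Microsoft.Office.Server.Search.Query;
using Microsoft.SharePoint;

class ClientSearch
{
    static void Main()
    {
        using (SPSite site = new SPSite("http://sharepoint/sites/files"))
        {
            // Filter the index on the CustomClientID managed property.
            FullTextSqlQuery query = new FullTextSqlQuery(site);
            query.QueryText =
                "SELECT Title, Path FROM SCOPE() WHERE CustomClientID = 42";
            query.ResultTypes = ResultType.RelevantResults;

            ResultTableCollection results = query.Execute();
            ResultTable relevant = results[ResultType.RelevantResults];

            // ResultTable implements IDataReader, so it loads
            // straight into a DataTable.
            DataTable table = new DataTable();
            table.Load(relevant, LoadOption.OverwriteChanges);
            Console.WriteLine("{0} documents found", table.Rows.Count);
        }
    }
}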
Our data currently resides in two large document libraries, one for each arm of the company.
After a few years of operation our server now holds some 250,000 documents, and we are having issues: full crawls (run weekly, off hours) sometimes crash partway through, and incremental crawls (run every 5 minutes during work hours) take 7-8 minutes to pick up 2-3 new files.
Question:
I was wondering if there is a way to get the search server crawler to pick up only the metadata we are supplying and ignore the document contents entirely, which I assume would speed up the crawl process by orders of magnitude. I believe the content indexing is what is described as full-text search, but I have not been able to find anything that explains whether it can be turned off.
If not, is there an alternative option for speeding up crawl times that anyone would advise?

SharePoint 2010 Subsite/Document Library/List Storage Data

Basically, all I want to do is get the total storage used by subsites, document libraries, and lists within a site collection. There does not seem to be a way to do it besides the following:
using SPSite.StorageManagementInformation, which is currently obsolete
SPSite.UsageInfo, which only works at the site-collection level:
SPSite.UsageInfo usageInfo = spSite.Usage;
long storageUsed = usageInfo.Storage;
SPWeb.GetUsageData, which only returns usage data for the current day (and up to the last 31 days)
finding the database table that Site Collection Administration > Storage Metrics queries (which I could not find, even using .NET Reflector on the assembly)
If anyone has any other way or idea on how to achieve this it would be very much appreciated!
My suggestion is to try opening the site in SharePoint Designer, right-clicking any of those collections, and clicking Properties. The size of the objects should be shown there as well. :)
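Another brute-force option is to walk the object model and sum file sizes yourself. A rough sketch with a placeholder site URL; note that it ignores version history and recycle-bin contents, and enumerating every item is slow on large libraries:

using System;
using Microsoft.SharePoint;

class StorageWalker
{
    static void Main()
    {
        using (SPSite site = new SPSite("http://sharepoint/sites/demo"))
        {
            foreach (SPWeb web in site.AllWebs)
            {
                long webBytes = 0;
                foreach (SPList list in web.Lists)
                {
                    long listBytes = 0;
                    foreach (SPListItem item in list.Items)
                    {
                        // item.File is null for plain list items; only
                        // document-library items carry a file.
                        if (item.File != null)
                            listBytes += item.File.Length;
                    }
                    webBytes += listBytes;
                    Console.WriteLine("  {0}: {1:N0} bytes", list.Title, listBytes);
                }
                Console.WriteLine("{0}: {1:N0} bytes", web.Url, webBytes);
                web.Dispose();
            }
        }
    }
}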

Website data retrieval

A recent article has prompted me to pick up a project I have been working on for a while. I want to create a web service front end for a number of sites to allow automated completion of forms and retrieval of data from the results, as well as from other areas of the site. I have achieved a degree of success using Selenium and custom code; however, I am looking to extend this to the point where adding additional sites is a trivial task (maybe one that doesn't even require a developer).
The Kapow web data server looks like it achieves a lot of this; however, I am told it is quite expensive (I am currently awaiting a quote). Has anyone had experience with it, or can anyone suggest alternatives (ideally open source)?
Disclaimer: I realise the potential legal issues around automating data retrieval from third-party websites - this tool is designed to be used in a price comparison system, and all of the websites integrated with it will be handled with the express permission of the owners. Where a site provides an API, that will clearly be the favoured approach.
Thanks
I realise it's been a while since I posted this; however, should anyone come across it: I have had a lot of success using the WSO2 framework (particularly the Mashup Server) for this. For data-mining tasks I have also used a Java library that it wraps - webharvest - which has achieved everything I needed.
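For simpler retrieval jobs where a full browser isn't needed, an HTML parser can handle the extraction step on its own. A minimal sketch using the HtmlAgilityPack library for .NET; the URL and XPath below are hypothetical placeholders:

using System;
using HtmlAgilityPack;

class PriceScraper
{
    static void Main()
    {
        // Fetch a page and pull values out by XPath.
        HtmlWeb web = new HtmlWeb();
        HtmlDocument doc = web.Load("https://example.com/products");

        // SelectNodes returns null when nothing matches, so guard it.
        HtmlNodeCollection nodes =
            doc.DocumentNode.SelectNodes("//span[@class='price']");
        if (nodes != null)
        {
            foreach (HtmlNode node in nodes)
            {
                Console.WriteLine(node.InnerText.Trim());
            }
        }
    }
}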

Automating WebTrends analysis

Every week I access server logs processed by WebTrends (for about 7 profiles) and copy ad clickthrough and visitor information into Excel spreadsheets. A lot of it is just navigating to certain sections, finding the right title, and copying the unique-visitor information.
I tried using WebTrends' built-in query tool, but it is really poorly done (it only offers a drag-and-drop system instead of a text-based one), and it caps both the number of parameters and the length of queries. As far as I know, the tools in WebTrends are not suited to my purpose of automating the entire web-metrics-gathering process.
I've gotten access to the raw server logs, but it seems redundant to parse them given that they are already being processed by WebTrends.
To me it seems very scriptable, but how would I go about doing that? Is screen-scraping an option?
I use ODBC for querying metrics and numbers out of WebTrends. We even fill a scorecard with all our key performance metrics.
It's in German, but maybe the idea helps you: http://www.web-scorecard.net/
Michael
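To give an idea of the ODBC approach: once the WebTrends ODBC driver is set up as a DSN, the pull is an ordinary ODBC query that you can script on a schedule. A sketch in C# - the DSN name, table, and column names are hypothetical, since the real schema depends on your WebTrends profiles:

using System;
using System.Data.Odbc;

class WebTrendsPull
{
    static void Main()
    {
        // DSN and SQL are placeholders - substitute the names the
        // WebTrends ODBC driver actually exposes for your profiles.
        using (OdbcConnection conn = new OdbcConnection("DSN=WebTrends"))
        {
            conn.Open();
            OdbcCommand cmd = new OdbcCommand(
                "SELECT Title, UniqueVisitors FROM AdClickthroughs", conn);
            using (OdbcDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Tab-separated output pastes cleanly into Excel.
                    Console.WriteLine("{0}\t{1}", reader[0], reader[1]);
                }
            }
        }
    }
}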
Which version of WebTrends are you using? Unless this is a very old install, there should be options to schedule these reports to be emailed to you, and also to bookmark queries. Let me know which version it is and I can make some recommendations.