Google Analytics - Tracking PDFs in one site, but not the other?

So I have a bit of a dilemma here. I am pretty familiar with how GA tracks PDFs, either by adding tagging to each PDF link or by using Tag Manager...
I have two separate sites, with two unique UA-xxxxx-1 IDs...
On one site, I can query back about 2 years and see ".pdf" files as pageviews...
On the other site, I query the same file extension and there is nothing, and I know for a fact PDFs are being downloaded via link clicks.
Why would one site show this and the other not?
They both utilize the same Universal Analytics script, and I have not installed Tag Manager on either site, yet one shows PDFs and the other does not.
Could there be something in the Admin section of the two sites in GA I need to look at?
Could something have been configured in one site, and not the other?
Am a bit confused by this...
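To be clear about what I mean by tagging: something along these lines, where a click on a .pdf link sends a virtual pageview. This is just a minimal analytics.js sketch, not code I know to be on either site:

```typescript
// Minimal sketch, assuming the standard Universal Analytics (analytics.js)
// snippet is already on the page. The handler and the way links are matched
// are illustrative only.
function trackPdfLinks(): void {
  document.addEventListener('click', (e: MouseEvent) => {
    const link = (e.target as Element | null)?.closest('a');
    if (!link) return;

    const path = new URL(link.href, location.href).pathname;
    if (!path.toLowerCase().endsWith('.pdf')) return;

    // Record the download as a virtual pageview so ".pdf" shows up in the
    // normal pageview reports.
    const ga = (window as any).ga;
    if (typeof ga === 'function') {
      ga('send', 'pageview', path);
    }
  });
}

trackPdfLinks();
```

If only one of the sites has something like this (or an equivalent Tag Manager tag) wired to its download links, that by itself would explain the difference, since the plain pageview snippet never fires for the PDF file itself.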

SEO: how can dynamic URLs with query strings be crawled by search engine bots?

I'm developing an ecommerce web site in ASP.NET using a SQL Server 2008 database.
Most of my pages are database driven and all the content is gathered from a SQL Server.
Every product page is created dynamically from data coming from the database, hence every product’s page URL has a unique query string, containing a “product_id” variable.
Example: http://www.myecommence.com/products.aspx?product_id=1
I'd like to improve my Search Engine Optimization.
Dealing with a small number of products could be fine, but what if I had more than 1000 products? How could every product be crawled?
How does the Google spider/bot know that a product_id with a hypothetical value of 767 exists?
I've been googling this, but I still can't understand how pages that have absolutely no reference on the site or on external sites can be crawled. If this were possible, the spider would have to know how to read the website's database tables, but I guess that is not the case.
At this point, since most of the pages and links are dynamic, how could they be indexed? The same applies to "user detail" pages that are accessed via a query string such as "user_id=n".
Probably what I'm asking has already been discussed, but some points are still not clear to me.
I would advise using Mod Rewrite rules to make your URLs search engine friendly.
This is very important for Google, as is a good category structure.
Eg:
domain.com/t-shirts/girls/star-wars-t-shirt/
is far better than
domain.com/products.aspx?product_id=1
Here is some info:
http://msdn.microsoft.com/en-us/library/ms972974.aspx
http://www.wrox.com/WileyCDA/Section/id-305997.html
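To make that concrete, all a rewrite rule really does is map the friendly path back to your existing query-string URL before the page executes. The sketch below only illustrates that mapping; it is not the actual ASP.NET configuration (that is what the MSDN link above covers), and the lookup table in it is invented:

```typescript
// Illustration only: the mapping a rewrite rule performs, written as code.
// On an ASP.NET site this would live in the URL-rewriting config or routing
// (see the MSDN link above); the table and names here are made up.
const slugToProductId: Record<string, number> = {
  't-shirts/girls/star-wars-t-shirt': 1,
  // ...one entry per product, normally driven by a slug column in the database
};

// Turn the friendly URL back into the internal query-string URL before the
// page runs, so visitors and crawlers only ever see the friendly form.
function rewrite(path: string): string | null {
  const slug = path.replace(/^\/+|\/+$/g, ''); // trim leading/trailing slashes
  const id = slugToProductId[slug];
  return id !== undefined ? `/products.aspx?product_id=${id}` : null;
}

// rewrite('/t-shirts/girls/star-wars-t-shirt/') -> '/products.aspx?product_id=1'
```

In practice the slug would be stored alongside each product in the database rather than hard-coded.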
To answer your questions:
Dealing with a small number of products could be fine, but what if I had more than 1000 products? How could every product be crawled?
If you have a good sitemap / menu structure etc, it is likely that Google will crawl all your pages.
How does the Google spider/bot know that a product_id with a hypothetical value of 767 exists?
Via crawling your site, via your sitemap, via the menu system on the site, etc. However, always remember: Google is not psychic - it cannot find a page unless you tell it how to, or link to it.
I've been googling this, but I still can't understand how pages that have absolutely no reference on the site or on external sites can be crawled. If this were possible, the spider would have to know how to read the website's database tables, but I guess that is not the case.
If a page really has no reference pointing to it, you are doing something wrong. Improve your site structure.
At this point, since most of the pages and links are dynamic, how could they be indexed? The same applies to "user detail" pages that are accessed via a query string such as "user_id=n".
There is nothing wrong with a dynamic URL per se, but again I would recommend implementing search-engine-friendly URLs via Mod Rewrite or similar - see the resources above.
Good luck,
Colin
Modern systems optimize for SEO by allowing either custom or automatically generated URLs that remap to your ID-based URL pattern. This URL style allows a fully custom, word-for-word product title or keyword/description in the URL, which carries more weight than an opaque ID number.
To ensure all individual pages are indexed, you generally benefit most from submitting or making available an XML sitemap. More info from Google on generating one here:
https://code.google.com/p/googlesitemapgenerator/
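For a catalogue like yours, the sitemap boils down to one <url> entry per product. Here is a rough sketch of generating it (the domain just follows the example URL in your question, and where the list of IDs comes from is up to you):

```typescript
// Rough sketch: build a sitemap.xml with one <url> entry per product ID.
// The productIds array would come from your products table.
function buildSitemap(productIds: number[]): string {
  const urls = productIds
    .map(
      (id) => `  <url>
    <loc>http://www.myecommence.com/products.aspx?product_id=${id}</loc>
  </url>`
    )
    .join('\n');
  return `<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
${urls}
</urlset>`;
}

// e.g. write buildSitemap([1, 2, 767]) to /sitemap.xml and submit it in
// Google Webmaster Tools.
```

With the sitemap submitted, every product_id can be discovered even when nothing links to it yet.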
Hope that gets you going in the right direction!

Google displaying website title differently in search results

Google displays my website's page title differently from how it is meant to appear.
The page title should be:
Graphic Designer Brighton and Lewes | Lewis Wallis Graphic Design
It displays fine in Bing, Yahoo and on my actual website.
However, Google displays it differently:
Lewis Wallis Graphic Design: Graphic Designer Brighton and Lewes
This is annoying as I want my keywords "graphic designer brighton" to go before my name.
I am using the Yoast SEO plugin and my only suspicion is that there might be a conflict between that and my theme, Workality.
Has anyone got any suggestions as to why this might be happening?
Google Search may change the webpage titles it shows on the results page (since January 2012):
We use many signals to decide which title to show to users, primarily the <title> tag if the webmaster specified one. But for some pages, a single title might not be the best one to show for all queries, and so we have algorithms that generate alternative titles to make it easier for our users to recognize relevant pages.
See also the documentation at http://support.google.com/webmasters/bin/answer.py?hl=en&answer=35624:
Google's generation of page titles and descriptions (or "snippets") is completely automated and takes into account both the content of a page as well as references to it that appear on the web. The goal of the snippet and title is to best represent and describe each result and explain how it relates to the user's query.
[…]
While we can't manually change titles or snippets for individual sites, we're always working to make them as relevant as possible.
In my answer on Webmasters SE I linked to questions from people having the same issue.
Is it possible that you changed the title, or installed the plugin, and Google hasn't picked up the changes yet?
It can take a few weeks for Google to pick up changes to your site, depending on how often it spiders it. The HTML looks fine so I can only think that Google hasn't got round to picking up the changes yet.

Author data not being recognized in Google structured data testing tool

I've searched all over the place and I can't figure out what I'm doing wrong. No matter what I do, I still get "Page does not contain authorship markup" from the structured data testing tool.
I have two sites with almost identical pages. The rel=author tags are inserted the same way.
Here is an example of one page that works: http://bit.ly/18odGef
Here is an example of one page that doesn't: http://bit.ly/12vXdAm
I tried adding ?rel=author to the end of the Google+ profile URL, which doesn't seem to work on either site. I am not blocking anything via nofollow or robots.txt. The tool is not being blocked by a firewall or anything. Can anyone see what I'm doing wrong here and why it works for one site, but not the other?
FYI, the site that does not work used to work without a problem. I hadn't changed anything with how the author markup was organized until I realized it wasn't working anymore.
When I test both of those pages in Google's structured data test tool, it shows that authorship is working correctly for both pages.
Here are the results for the page you said was working: https://www.google.com/webmasters/tools/richsnippets?q=http%3A%2F%2Fnikonites.com%2Fd5100%2F2507-d5100-vs-d90.html%23axzz2rFFm1eVv
Here are the results for the page you said wasn't working: https://www.google.com/webmasters/tools/richsnippets?q=http%3A%2F%2Fcellphoneforums.net%2Fsamsung-galaxy%2Ft359099-enable-auto-correct-galaxy-note-ii.html%23axzz2rFFlwz3W

File hosting to track total downloads of a PDF

I am looking for a web service that will allow me to upload a PDF and track the number of times it is downloaded, regardless of the source. I am aware of Google Analytics event tracking on my site, but the issue here is that I need to give the file path to a number of partner sites and would like a centralized place to view total downloads across all partners. A breakdown of downloads by source would be awesome but not necessary. I can't rely on getting numbers from all of the partners, as some may not even have GA set up at all.
Does something like this exist? Free is nice but would be willing to pay for an account if necessary.
Thanks.
Ended up using bit.ly to shorten the path to the PDF hosted on my server. Gave the shortened URL to the partners. Bit.ly provides good click stats by simply adding a "+" to the end of the shortened URL, so we could see results.
Have you tried Ge.tt?
I believe it shows the number of times your files have been downloaded.

SharePoint 2010: what's the recommended way to store news?

To store news for a news site, what's a good recommendation?
So far, I'm opting for creating a News Site, mainly because I get some web parts for free (RSS, "week in pictures"), workflows are in place, and the authoring experience in SharePoint seems reasonable.
On the other hand, I see, for example, that by just creating a Document Library I can store Word documents based on the "Newsletter" template and saved as web pages, and they look great; the authoring experience in Word is better than that in SharePoint.
And what about just creating a blog site!
Anyway, what would people do? Am I missing a crucial factor here for one or the other? What's a good trade-off here?
Thanks!
From my experience, the best option would be to
Create a new News Site
Create a custom content type with properties like Region (Choice), Category (Choice), Show on homepage (Boolean), Summary (Note), etc.
Create a custom page layout attached to the above content type. Give it the look and feel you want your news articles to have.
Attach the content type (with its page layout) to the Pages library of the News Site and make it the default content type.
The advantage of this approach is that you can use a Content Query Web Part (CQWP) on the home page to show the latest 5 articles. You can also show a one-liner or a picture if you make them properties of the custom content type.
By storing news in Word documents, you are not really using SharePoint as a publishing environment but only as a repository. The choice is yours.
D. All of the above
SharePoint gives you a lot of options because there is no one-size-fits-all solution that works for everyone. The flexibility is not there to overwhelm you with choices, but rather to let you focus on your process, either how it exists now or how you want it to be, and then select the option that best fits that process.
My company's intranet is a team site and news is placed into an Announcements list. We do not need anything flashy. The plain text just needs to be communicated to the employees. On the other hand, our public internet site is a publishing site, which gives our news pages a more finished touch in terms of styling and images. It also allows us to take advantage of scheduling, content roll-up, and friendly URLs, along with the security of locking down the view forms. Authoring and publishing such a page is more involved than the Announcements list, but each option perfectly fits what we want to accomplish in each environment.
Without knowing more about your needs or process, based only on your highlighting Word as the preferred authoring tool, I would recommend a Blog. It is not as fully featured as a publishing site, but there is some overlap. And posts can be authored in Word.
In the end, if you can list what you want to accomplish, how you want to accomplish it, and pick the closest option (News Site, Team Site, Publishing Site, Blog, Wiki, etc), then you will have made the correct choice.
I tend to use publishing sites for news, for the reasons you mentioned and for the page-editing features.
They also allow you to set scheduled go-live and un-publish dates, which is kind of critical for news items.