I recently placed an ad in a weekly publication that sends out a PDF file. My ad is directly linked so that the reader can click on it and go to my website. The PDF is hosted on a different server and has to be downloaded and viewed from that site; it isn't emailed or shared any other way. I have Google Analytics and a couple of other stats-tracking programs installed, and I can't see the referring URL from this other site at all, in anything. Is there something I can ask the designer of the PDF file to include in her links to make them trackable? Or is this simply not possible?
Use Google Analytics Campaign Tagging.
This tool will help you set it up. At a minimum, you'll want to set the source and medium variables.
http://www.google.com/support/analytics/bin/answer.py?hl=en&answer=55578
So, for example, if your URL is http://example.com, you could set the parameters as such:
utm_source: BlahNews
utm_medium: newsletter
utm_campaign: july10issue
Your resulting URL would be http://example.com/?utm_source=BlahNews&utm_medium=newsletter&utm_campaign=july10issue
Google Analytics will track these hits under that campaign, source, and medium.
If the URL will be displayed raw and you want to avoid showing an ugly URL, you could set up an internal redirect to it. It looks like you're using WordPress, and there are a few free plugins that manage redirects like this (I happen to like 'Redirection').
So, you could tell the plugin to redirect
http://example.com/blahnews TO http://example.com/?utm_source=BlahNews&utm_medium=newsletter&utm_campaign=july10issue
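If you'd rather not use a plugin, the redirect itself is tiny. A minimal PHP sketch, assuming a standalone script (the file name blahnews.php is hypothetical; the target is the tagged URL from above):

<?php
// blahnews.php - hypothetical standalone redirect; the Redirection plugin does
// roughly the same thing. Visitors hitting the short, readable URL are sent on
// to the campaign-tagged URL, so Google Analytics still records source,
// medium and campaign.
header(
    'Location: http://example.com/?utm_source=BlahNews&utm_medium=newsletter&utm_campaign=july10issue',
    true,
    301 // permanent redirect; use 302 if the target may change later
);
exit;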
Can you ask them to put some token in the query string of the URL to the site?
Why am I able to Google messages in (for example) gitter.im? How did Google index all of this: https://gitter.im/neoclide/coc.nvim?at=5ea00cdda3612210839689f1 ?
Does gitter.im return its content to Google in another format, or via some specific interface/protocol declared in a special section for web crawlers somewhere? Did Google spend development resources to build a gitter.im-specific crawler that is able to make specific XHR requests?
Simple:
Google fetches https://gitter.im/gitter/developers.
There are N recent messages embedded in the HTML already, say 50. Google then just extracts all the links from that HTML (from the time tags like "18:15", for example). Each time tag gives a URL of the form https://gitter.im/gitter/developers?at=610011abc9f8852a970e808e, and Google doesn't care why; it just remembers the URLs.
Google then fetches those 50 grabbed URLs of the form https://gitter.im/gitter/developers?at=610011abc9f8852a970e808e.
Each such URL gives you ~50 messages around that exact message, so the search engine thinks: "OK, this URL gives you THIS text."
So when you search for THIS text, it just gives you the URL closest to that text, or maybe just any URL with that text...
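A rough sketch of that first step in PHP, just to illustrate the idea (the exact markup gitter emits, and therefore the regular expression, is an assumption here; this is not what Google actually runs):

<?php
// Fetch the public room page; the most recent messages are already in the HTML.
$html = file_get_contents('https://gitter.im/gitter/developers');

// Pull out the per-message permalinks of the form ...?at=<message id>.
preg_match_all('~href="(/gitter/developers\?at=[0-9a-f]+)"~', $html, $matches);

foreach (array_unique($matches[1]) as $path) {
    // Each permalink can now be fetched and indexed like any other page.
    echo 'https://gitter.im' . $path . "\n";
}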
In the top-right social media widget on this page there is a link to a PDF file titled CURRENT ROAD REPORT.
This PDF is updated hourly, 24 hours a day, and it keeps the same URL.
Is there a way to stop browsers caching this download link?
Just add a dynamic GET parameter (e.g. a timestamp) to the link to bypass any cache. In PHP this could be done like this:
<a href="/path/to/report.pdf?t=<?php echo time(); ?>">link</a>
If you have access to the server that delivers the file, you could also set the appropriate cache headers at the web server level.
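For example, if the PDF were handed out through a small PHP wrapper instead of as a static file, that wrapper could send no-cache headers along with it. A minimal sketch (the script name and file path are placeholders):

<?php
// report.php - hypothetical wrapper that serves the PDF with no-cache headers,
// so browsers re-request it instead of reusing a stale cached copy.
$file = '/path/to/current-road-report.pdf';

header('Content-Type: application/pdf');
header('Content-Length: ' . filesize($file));
header('Cache-Control: no-cache, no-store, must-revalidate');
header('Pragma: no-cache');
header('Expires: 0');

readfile($file);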
I am looking for a web service that will allow me to upload a PDF and track the number of times it is downloaded, regardless of the source. I am aware of Google Analytics event tracking on my site, but the issue here is that I need to give the file path to a number of partner sites and would like a centralized place to view total downloads across all partners. A breakdown of downloads by source would be awesome but not necessary. I can't rely on getting numbers from all of the partners, as some may not even have GA set up at all.
Does something like this exist? Free is nice but would be willing to pay for an account if necessary.
Thanks.
Ended up using bit.ly to shorten the path to the PDF hosted on my server. Gave the shortened URL to the partners. Bit.ly provides good click stats by simply adding a "+" to the end of the shortened URL, so we could see results.
Have you tried Ge.tt?
I believe it shows the number of times your files have been downloaded.
I have a business listing site (www.brate.com) where people can search for local businesses and rate them.
The entire site is built using GWT (i.e. Ajax) and all content is generated dynamically. Now I am at the phase where I want the site to be SEO friendly. Below is my approach; please advise me if it's the best way to implement it.
1- Create a static HTML snapshot of each business and its related data (site, address, phone number, user reviews, etc.) and put all the generated HTML files under a single directory
2- Create a sitemap XML file that contains all of the above HTML links
3- Configure Webmaster Tools to crawl and index all the generated HTML snapshots
Now my logic is that when a Google search query lists one of the above generated HTML files in its results, I want to redirect the user to the site's main page (www.brate.com), not the HTML snapshot.
Can I use a redirect (for example a meta refresh tag) in the generated snapshots?
If not, what is the best way to achieve the above-mentioned logic?
Thanks
Sameeh, here is one suggested approach for GWT:
Ensure that you have correctly handled history tokens for all your pages in GWT. Let the tokens start with exclamation (!).
Associate GWT history tokens with generated pages using #! notation
Make the tokens keyword-rich, as you would for any URL optimization in SEO
Read through https://developers.google.com/webmasters/ajax-crawling/ to understand the #! notation (a minimal server-side sketch follows below).
Details on support by Bing: http://searchengineland.com/bing-now-supports-googles-crawlable-ajax-standard-84149
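To illustrate how the #! scheme ties the tokens to your snapshots: when Googlebot sees a URL like http://www.brate.com/#!acme-plumbing, it instead requests http://www.brate.com/?_escaped_fragment_=acme-plumbing, and your server is expected to answer with the pre-generated HTML snapshot. A minimal PHP sketch of that server side (the snapshots/ directory and the simple slug-style token format are assumptions for this example):

<?php
// index.php (sketch) - serve the static snapshot when a crawler requests the
// _escaped_fragment_ form of a #! URL; otherwise serve the normal GWT app.
if (isset($_GET['_escaped_fragment_'])) {
    // e.g. #!acme-plumbing  ->  ?_escaped_fragment_=acme-plumbing
    $token    = preg_replace('/[^a-z0-9\-]/i', '', $_GET['_escaped_fragment_']);
    $snapshot = __DIR__ . '/snapshots/' . $token . '.html';   // pre-generated file

    if (is_file($snapshot)) {
        readfile($snapshot);
        exit;
    }
    header('HTTP/1.1 404 Not Found');
    exit;
}
// ...normal GWT host page is served here for real visitors...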
We have a website that requires a username and password. Once logged in, the user can select a link to a PDF in the web browser. Once this has happened, they can see the full URL path of the PDF; they could copy and paste the path into a different browser without logging in, or send the address to someone else to look at.
I am asking this for a co-worker, so I am not entirely sure what is needed, but they want to change it from, say, "documents/customerlist.pdf" to "documents/info.asp" (not sure what the file type should be, maybe just "documents/info"?). I think that is the goal. Is this possible? If someone could point me in the right direction we might be able to figure it out!
I should think you can do this in ASP. You'll need to deliver the PDF dynamically via an ASP page, which detects the user's session and only serves the data if they are suitably authenticated (so copying the URL to a different browser/machine will result in a 404 or access denied, as you wish). You'll need to read the data from file and binary-write it to the browser, and set HTTP headers for mime-type, content length etc.
I'd start off by serving it from a pdf.asp?file=customerlist URL, but you can later experiment with changing this to something more readable (e.g. docs/customerlist). You'll need to look into URL rewriting for that.
So, that's the general approach. If you do a web-search around these topics ("ASP serve binary file", "ASP URL rewriting") you are sure to get plenty of examples.
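The same pattern, sketched in PHP for illustration (the answer above describes doing it in ASP; the session key, whitelist, script name, and file location here are all assumptions):

<?php
// getpdf.php?file=customerlist - only serves the PDF to logged-in users.
session_start();

if (empty($_SESSION['user_id'])) {
    header('HTTP/1.1 403 Forbidden');   // or redirect to the login page
    exit;
}

// Whitelist the file name so the parameter can't be abused to read other files.
$allowed = ['customerlist'];
$name = $_GET['file'] ?? '';
if (!in_array($name, $allowed, true)) {
    header('HTTP/1.1 404 Not Found');
    exit;
}

$path = '/var/docs/' . $name . '.pdf';  // stored outside the web root

header('Content-Type: application/pdf');
header('Content-Length: ' . filesize($path));
readfile($path);

Because the PDF lives outside the web root and is only streamed after the session check, pasting the URL into another browser without logging in gets a 403 instead of the file.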