Tracking downloads of PDFs from external sites

I have a site which hosts several PDFs which are often linked to directly from other websites.
I want to be able to track (preferably with Google Analytics) whenever anyone lands on a PDF page from an external site.
It would also be great if I could track whenever an external site's page loads up my PDF using an <embed> tag with my PDF URL as the src attribute.
The usual _gaq.push event-tracking method (as suggested for similar problems) doesn't work in this case because I can't modify the external sites.
Any help would be greatly appreciated. Thank you.
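Since the PDFs are static files, no JavaScript can run when they are fetched, so one possibility (a minimal sketch, not from this thread) is to serve them through a small server-side handler that records a pageview via the Google Analytics Measurement Protocol before returning the file. That captures direct links and <embed> loads alike, because both hit the server. The tracking ID, port, and directory below are placeholders, and a Universal Analytics property is assumed:

import os
import uuid
import urllib.parse
import urllib.request
from wsgiref.simple_server import make_server

GA_TRACKING_ID = "UA-XXXXX-Y"   # placeholder Universal Analytics property ID
PDF_DIR = "/var/www/pdfs"       # assumed location of the hosted PDFs

def track_pageview(path, referrer):
    # Fire a Measurement Protocol pageview hit; the referrer lets GA
    # attribute the hit to the external site that linked or embedded the PDF.
    payload = urllib.parse.urlencode({
        "v": "1",
        "tid": GA_TRACKING_ID,
        "cid": str(uuid.uuid4()),   # anonymous client ID
        "t": "pageview",
        "dp": path,
        "dr": referrer,
    }).encode("ascii")
    urllib.request.urlopen("https://www.google-analytics.com/collect", payload)

def app(environ, start_response):
    # Look up the requested PDF, log the hit, then serve the file.
    path = environ.get("PATH_INFO", "")
    filename = os.path.join(PDF_DIR, os.path.basename(path))
    if not os.path.isfile(filename):
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"Not found"]
    track_pageview(path, environ.get("HTTP_REFERER", ""))
    start_response("200 OK", [("Content-Type", "application/pdf")])
    with open(filename, "rb") as f:
        return [f.read()]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()

In practice you would put a handler like this behind a rewrite rule so the existing PDF URLs keep working.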

Related

How to access BigCommerce internalapi?

I am trying to download (backup) images that customers upload for products that take custom logos (these are typically JPG, PNG, PDF, etc.). These customer files are downloadable by clicking a hyperlink in the BigCommerce admin page for the order in question. The link is not a link to the image path but instead a link to a service that sends the file to the browser. In other words, you have to be authenticated into the admin site to download the file. The URL looks like this:
https://mystore.com/internalapi/v1/orders/383945/products/251438/attributes/561518/download
https://mystore.com/internalapi/v1/orders/{order id}/products/{lineItem id}/attributes/{option id}/download
These are easily constructed in the API itself for a given order. If I use the link in a browser tab while I'm logged into the admin site, the file downloads.
But I am trying to write an app to automatically download all the files (there are thousands). When I try to use this URL in an app, I get an authentication error. I tried my regular API credentials first, then the credentials I use to log into the admin site. Both give me an authentication error.
I could not find anything documented on this so-called "internalapi." Anyone ever try to use this "internal" API that is used by the admin site?
I believe authentication is cookie-based for that internal API, but there could be problems with using our non-publicly documented internal APIs in production, i.e. we may make future updates that would be breaking changes.
Images attached to orders through a file upload option also get copied to WebDAV, in the dav/product_images/configured_products folder. Another way to do this could be to use a WebDAV client library like easywebdav to connect and download the files.
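A minimal sketch of that WebDAV route, assuming the easywebdav library mentioned above (the hostname, credentials, and local folder are placeholders):

import os
import easywebdav

# Connect to the store's WebDAV endpoint (hostname and credentials are placeholders)
webdav = easywebdav.connect(
    "mystore.com",
    username="webdav_user",
    password="webdav_password",
    protocol="https",
)

remote_dir = "/dav/product_images/configured_products"
local_dir = "customer_uploads"
os.makedirs(local_dir, exist_ok=True)

# ls() returns one entry per remote item; download each file into the local folder
for entry in webdav.ls(remote_dir):
    filename = os.path.basename(entry.name)
    if not filename:  # skip the directory entry itself
        continue
    webdav.download(entry.name, os.path.join(local_dir, filename))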

Create a copy of my website on cPanel

I want to link my mobile website variant to the desktop version, i.e. a "view website in desktop" button: you click it and it takes you to the desktop version. The program I use is Xara Web Designer, and cPanel is where the files are hosted.
They have advised I need to create two copies of my website to do this.
I can duplicate my website, but how would I go about uploading the two copies to cPanel? Also, I would need to remove the robots.txt file from one of them because I do not want Google crawling the duplicate site (SEO issues).
Would I need two domains? I want to keep just one.
Thanks in advance; please let me know if you need clarification on anything.
There are better solutions with regard to SEO, such as making your website 'mobile responsive'. But to answer your specific question, you are probably best to create a mobile website and set it up on an 'm.' subdomain (so you only need your one domain name). You can do this from cPanel under 'Subdomains'.
As for keeping the mobile website out of the index, rather than blocking it with robots.txt the correct method is to use the 'canonical' tag. This is a good guide from Google on the subject: https://developers.google.com/webmasters/mobile-sites/mobile-seo/overview/select-config?hl=en
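For reference, the setup that guide describes pairs an alternate annotation on the desktop page with a canonical on the mobile page (example.com is a placeholder):

On the desktop page (http://example.com/page):
<link rel="alternate" media="only screen and (max-width: 640px)"
      href="http://m.example.com/page">

On the mobile page (http://m.example.com/page):
<link rel="canonical" href="http://example.com/page">

This tells Google the two URLs are the same content, so the duplicate is consolidated rather than penalized.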

Display Bitstreams in Google Docs Viewer

I've been embedding Google Docs Viewer in my DSpace instance, an online digital repository, using an iframe.
This is my site link: http://202.78.89.123:8081/xmlui/handle/123456789/145
DSpace generates bitstream links to each item/PDF in the repository. When I click the preview link, an iframe appears on my page but doesn't load any document. But when I change the source of my iframe to a PDF file path that is publicly accessible and not a bitstream, the viewer loads the document.
I have done everything I can, including checking whether my web server is publicly accessible to Google Docs Viewer. My web server is publicly accessible, so does Google Docs Viewer no longer support bitstreams? If so, how can I display a PDF file in an iframe within my page? Any ideas?
My page just shows "Apologies. There is no Preview Available".
The reason I couldn't display the bitstreams in Google Docs Viewer is that even though my IP is publicly accessible, my ports were private. What I did was make the ports public by configuring them on the router.
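For anyone embedding the viewer the same way, the iframe pattern looks like this (the bitstream URL is a placeholder; it must be URL-encoded and publicly reachable so the viewer can fetch it):

<iframe src="https://docs.google.com/viewer?embedded=true&url=http%3A%2F%2Fexample.com%2Fbitstream%2F123456789%2F145%2F1%2Fdocument.pdf"
        width="600" height="780"></iframe>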

JavaFX WebView - PDF in popup window

I have posted some other, smaller questions about the problem I describe below and got some feedback, but now I will try to explain it in more depth, hoping to get to the bottom of it.
I built a desktop application using JavaFX 2.2 which uses a WebEngine to access a website built with Oracle ADF pages. The application tracks the user's actions on the pages and stores data to a database. All fine so far, until the point where I need to show a PDF file on a user click.
On the actual website the user clicks a button and a new popup window opens up that displays the PDF.
My problem is that due to the lack of PDF support in JavaFX I cannot display the PDF. The actual link to the PDF is dynamic and doesn't end in .pdf, so I can't just hand the URL to an external browser to display it. Additionally, the connection is secure, so I can't open the URL with Chrome, for example.
One possible solution I thought of is to read the binary data of the response from the WebView, create the PDF file locally, and then open it using Adobe Reader or Chrome or something. Is that possible at all?
Another solution I thought of (while writing this question) is to open the URL in the user's default browser, but how would I go about sending the secure session cookie from the application to the browser?
Is any of the above even possible? Am I missing something?
Any help, clues, links, ideas would be very much appreciated.
Thanks
I think the best way to do what you want is to download the PDF and display it locally.
Downloading using WebView sounds like it could work, but I'm not familiar with the user experience. As an alternative, try using curl or wget: you can pass the authorization cookie to those tools and use them to download the file.
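A rough sketch of that, assuming the session cookie is named JSESSIONID (typical for ADF applications; the cookie value and URL are placeholders):

# Replay the WebEngine's session cookie so the secure link serves the PDF
curl --cookie "JSESSIONID=<value read from the WebEngine session>" \
     --output document.pdf \
     "https://example.com/app/dynamic-pdf-link"

Once the file is on disk, java.awt.Desktop.getDesktop().open(file) can hand it to the system's default PDF viewer.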

How to create sitemap for website with different pages in mobile and desktop?

I have a website with two different page structures - one for mobile visitors and one for desktop. That's why I have two sitemap files - one for mobile and one for desktop.
I want to create a robots.txt file that will "tell" search engines bots to scan the mobile sitemap for mobile sites, and the desktop sitemap for desktop sites.
How can I do that?
I thought of creating a sitemap index file which will point to both of those site maps, and to add the following directive to the robots.txt file:
sitemap: [sitemap-index-location]
Is this the right way?
I can't say for certain, but I believe the best practice is to declare both sitemaps in robots.txt. In the mobile sitemap, the <mobile:mobile/> markup already indicates that it is a mobile version.
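For reference, a mobile sitemap entry with that markup looks like this (m.example.com is a placeholder):

<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:mobile="http://www.google.com/schemas/sitemap-mobile/1.0">
  <url>
    <loc>http://m.example.com/page</loc>
    <mobile:mobile/>
  </url>
</urlset>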
Another interesting option is perhaps to also create a sitemap index:
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>http://example.com/sitemap.desktop.xml</loc>
  </sitemap>
  <sitemap>
    <loc>http://m.example.com/sitemap.mobile.xml</loc>
  </sitemap>
</sitemapindex>
And your robots.txt will look like:
# Sitemap index
Sitemap: http://example.com/sitemap.xml
# Individual sitemaps. These are already declared in the sitemap index, but I believe listing them here as well does no harm
Sitemap: http://example.com/sitemap.desktop.xml
Sitemap: http://example.com/sitemap.mobile.xml
robots.txt does not tell search engines which sitemap is the mobile one and which is the desktop one; it can only declare sitemaps, and that is sufficient. You could also add device detection in the HTML page header so that desktop visitors get the desktop page and mobile visitors get the mobile page. Having a link to the sitemap on the page itself can also help promote indexing.
I would recommend responsive website design instead.
With responsive web design you build pages that adapt using CSS3 media queries. There is one HTML document for the page regardless of the device accessing it, but its presentation changes through CSS media queries, which specify which CSS rules the browser applies when displaying the page.
With responsive website design you can keep both the desktop and mobile content on a single URL. That makes it easier for your users to interact with, share, and link to, and for Google's algorithms to assign indexing properties to your content.
Besides, Google will crawl your content effectively, and there won't be any need to crawl a page with a different Googlebot user agent.
You can simply define a single sitemap and reference it in the robots.txt file; it will cover both your desktop and mobile content.
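A small illustration of the media-query technique (the breakpoint and class name are arbitrary):

/* One HTML document; presentation adapts via CSS3 media queries */
.sidebar { float: right; width: 300px; }

@media only screen and (max-width: 640px) {
  /* On small screens, stack the sidebar under the main content */
  .sidebar { float: none; width: 100%; }
}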
In addition to listing the files in robots.txt, you should log into Google Webmaster Tools and submit the sitemaps there. That will tell you:
Whether the sitemap URL you submitted is correct
Whether the sitemap file has the correct syntax
How many of the URLs in the sitemap have been crawled
How many of the URLs in the sitemap have been indexed