I have created a Google+ page for my website. I want to verify my website link. As per the instructions, I downloaded the verification file, uploaded it to the domain, and tried to verify the link. But when I do that, I get an error.
<html>
<head>
<title> Title of the Website </title>
</head>
<frameset border="0" rows="100%,*" cols="100%" frameborder="no">
<frame name="TopFrame" scrolling="yes" noresize src="http://abc.co.in/google366be22c1c1da379.html">
<frame name="BottomFrame" scrolling="no" noresize><noframes></noframes> </frameset>
</html>
The website link I am trying to verify is:
http://www.maindomain.in
which is redirected (via the frameset above) to another domain:
http://www.abc.co.in
With regular hosting this error would not occur, but in this scenario how do I verify the website link?
I have a page set up for the Open Graph protocol because our app is built on Angular 1.x. When we share a URL using LinkedIn, the share popup opens, but sometimes it does not crawl the Open Graph tags and sometimes it shows the properly crawled tags. It was working fine until last week.
Scenario for sharing a link:
A user comes to our site at www.example.com/event/[EVENT_ID] and clicks share to LinkedIn.
A popup opens using https://www.linkedin.com/shareArticle?mini=true&url=https://example.com/event/0u83s43rf6r/4295028179 where 4295028179 is the event id and 0u83s43rf6r is a random key added for cache busting.
We are using Apache mod_rewrite to redirect the LinkedIn, Facebook, and Twitter bots to our crawler page, where the Open Graph tags are rendered.
Apache mod_rewrite settings in the .htaccess file:
RewriteCond %{HTTP_USER_AGENT} ^(facebookexternalhit/(.*)|Facebot|Twitter(.*)|Pinterest|LinkedIn(.*)|LinkedInBot)$ [NC]
RewriteRule ^(event)/([_0-9a-zA-Z]+)/([0-9]+)$ https://share.example.com/web/crawler/details/$3 [R=301,L]
So when a crawler is redirected based on its user agent, the final URL where the Open Graph tags are rendered becomes: http://share.example.com/web/crawler/details/4295028179
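To see exactly what LinkedInBot gets, you can imitate it with curl (a quick check using the example share URL above; the exact LinkedInBot user agent string may differ, but anything matching the RewriteCond will do):
curl -I -A "LinkedInBot/1.0" "https://example.com/event/0u83s43rf6r/4295028179"
The Location header of the 301 response should point at the share.example.com details page for the event id.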
Here are the rendered HTML tags:
<html>
<head>
<script type="text/javascript">window.location = 'https://example.com/event/236129271' // if it's a browser then redirect it to website</script>
<meta property="og:title" content="Event Title" />
<meta property="og:description" content="Event Description" />
<meta property="og:image" content="Event Thumbnail" />
<meta name="title" content="LinkedIn Share Test" />
<meta name="description" content="Event Description" />
<meta property="og:image:width" content="188" />
<meta property="og:image:height" content="71" />
<!-- Twitter Card Working Fine-->
<meta name="twitter:card" content="summary_large_image">
<meta name="twitter:title" content="Event Title">
<meta name="twitter:description" content="Event Description">
<meta name="twitter:image" content="Event Image">
</head>
<body>
</body>
</html>
Until last week this logic was working fine on LinkedIn, but now somehow it's not working.
Your code seems fine; you have the right og: tags, etc.
Whenever you're not sure what the LinkedIn share API is doing, check your website with the LinkedIn Post Inspector; it will tell you how the LinkedIn API is looking at your webpage. It covers many things, from <title> tags, to og: tags, to oEmbed tags, etc.
Worried about caching? Why not test a URL like example.com?someFakeParameter=123? This will similarly bypass the caching at the LinkedIn Post Inspector.
If you could post your actual URL that you're sharing, I could give you a better answer, but hopefully something here helps!
Let's say I have a simple blog engine. I've posted a simple post with the URL
http://example.org/blog/awesomr-post
A few days later I noticed the typo and fixed my URL:
http://example.org/blog/awesome-post
But search engines have already indexed "awesomr-post", and anybody who follows that link will get a 404 error. There is the same issue with bookmarked pages.
So I think the post should be reachable through two links:
http://example.org/blog/awesome-post
http://example.org/permalinks/1
Now I have to specify the relationship somehow. Here is what I can do:
http://example.org/permalinks/1
<!DOCTYPE html>
<html>
<head>
<link rel="canonical" href="http://example.org/blog/awesome-post">
</head>
<body>
page content
</body>
</html>
http://example.org/blog/awesome-post
<!DOCTYPE html>
<html>
<head>
<link rel="bookmark" href="http://example.org/permalinks/1">
</head>
<body>
page content
</body>
</html>
Is this the right solution? And should I use the canonical or the permalink URL when linking from other pages of the site?
One way is to set up a 301 (permanent) redirect from http://example.org/blog/awesomr-post to http://example.org/blog/awesome-post.
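With Apache, for example, that can be a short mod_rewrite rule in an .htaccess file (a minimal sketch, assuming mod_rewrite is enabled and the .htaccess sits in the document root; the paths are the ones from the question):
RewriteEngine On
RewriteRule ^blog/awesomr-post$ /blog/awesome-post [R=301,L]
Browsers and crawlers that follow the old link get the new URL, and search engines will eventually replace the indexed entry with the corrected one.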
I have an HTML page open inside a UIWebView controller with Cordova. While index.html is loading inside the UIWebView controller, can we sniff the requests originating from index.html?
For example, I have the following HTML that gets opened in the UIWebView controller:
<html>
<head>
<link rel="stylesheet" type="text/css" href="theme.css">
<script src="app.js"></script>
</head>
<body>
<img src="img.jpg"/>
</body>
</html>
Can I sniff and modify the URLs that get requested inside the UIWebView controller, i.e. img.jpg, theme.css, and app.js, to something like content/img.jpg, css/theme.css, and js/app.js, using Objective-C?
Yes, that's possible using NSURLProtocol; see the blog post by NSHipster and this related Stack Overflow thread.
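A minimal sketch of the idea (the class name and the URL mapping below are illustrative, not from the post; NSURLConnection is used because it matches the UIWebView era):
#import <Foundation/Foundation.h>
// Rewrites img.jpg -> content/img.jpg, theme.css -> css/theme.css,
// app.js -> js/app.js before the web view loads them.
static NSString * const kHandledKey = @"RewriteProtocolHandled";
@interface RewriteURLProtocol : NSURLProtocol <NSURLConnectionDataDelegate>
@property (nonatomic, strong) NSURLConnection *connection;
@end
@implementation RewriteURLProtocol
+ (BOOL)canInitWithRequest:(NSURLRequest *)request {
    // Ignore requests we have already rewritten, to avoid an endless loop.
    if ([NSURLProtocol propertyForKey:kHandledKey inRequest:request]) return NO;
    NSString *name = request.URL.lastPathComponent;
    return [@[@"img.jpg", @"theme.css", @"app.js"] containsObject:name];
}
+ (NSURLRequest *)canonicalRequestForRequest:(NSURLRequest *)request {
    return request;
}
- (void)startLoading {
    NSDictionary *map = @{@"img.jpg": @"content/img.jpg",
                          @"theme.css": @"css/theme.css",
                          @"app.js": @"js/app.js"};
    NSString *name = self.request.URL.lastPathComponent;
    NSString *rewritten = [self.request.URL.absoluteString
        stringByReplacingOccurrencesOfString:name withString:map[name]];
    NSMutableURLRequest *newRequest = [self.request mutableCopy];
    newRequest.URL = [NSURL URLWithString:rewritten];
    [NSURLProtocol setProperty:@YES forKey:kHandledKey inRequest:newRequest];
    self.connection = [NSURLConnection connectionWithRequest:newRequest delegate:self];
}
- (void)stopLoading {
    [self.connection cancel];
}
// Relay the response of the rewritten request back to the web view.
- (void)connection:(NSURLConnection *)connection didReceiveResponse:(NSURLResponse *)response {
    [self.client URLProtocol:self didReceiveResponse:response
          cacheStoragePolicy:NSURLCacheStorageNotAllowed];
}
- (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data {
    [self.client URLProtocol:self didLoadData:data];
}
- (void)connectionDidFinishLoading:(NSURLConnection *)connection {
    [self.client URLProtocolDidFinishLoading:self];
}
- (void)connection:(NSURLConnection *)connection didFailWithError:(NSError *)error {
    [self.client URLProtocol:self didFailWithError:error];
}
@end
Register the protocol once at startup, e.g. [NSURLProtocol registerClass:[RewriteURLProtocol class]]; in application:didFinishLaunchingWithOptions:, and every matching request from the web view gets rewritten before it is loaded.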
In my page, I have this:
<meta property="og:image" content="<?php echo $picURL; ?>"/>
Which when executed is rendered like this:
<meta property="og:image" content="http://a3.sphotos.ak.fbcdn.net/hphotos-ak-ash3/556898_400257580012798_100000856787624_1059515_311974781_n.jpg"/>
But the Facebook scraper is seeing it like this:
<meta property="og:image" content="">
It seems it is not considering images from Facebook.
See my answer here, but you can't use images hosted on Facebook in your meta tags. The scraper used to give a clearer error message telling you that you couldn't hotlink Facebook images.
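The usual fix is to host a copy of the image on your own server and point og:image at that copy, for example (the domain and filename here are made up for illustration):
<meta property="og:image" content="http://www.example.com/images/event-photo.jpg"/>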
This may be a dumb newbie question, so apologies for that.
My website is using an SSL certificate. I also include the W3C validator link in each of my webpages as follows:
<img src="valid-xhtml1.png" alt="Valid XHTML 1.0 Strict" height="31" width="88" />
(Note: I copied over the W3C validator image so SSL wouldn't complain about insecure resources.)
When I do this and click on the image to validate the page, I get an error from the validator about requesting it insecurely. So I tried changing the href of the <a> tag to use https for the validator, but then the page simply doesn't load (I guess because the validator doesn't use SSL).
Does anyone know a way around this? I am guessing there is not a way to use the code as is, but maybe there is a way to update uri=referer to be uri=https://mysite.com/...? Is there a way to dynamically grab the URL of the current page?
Also, just for further reference, does SSL simply prevent the referer request header from being accessed?
Oh, and I know I can just go to my website using http instead of https, and the validator works. But I'd rather get it configured to work with https too.
As for the "validate icon" question:
This would usually lead to a message about "insecure items" (i.e. mixed http+https content)... the validation icon is not officially supported in such a setup... a partial workaround is described here.
If you want to grab the URI dynamically, I suspect you will have to use JavaScript and then create/add the <a> element in the DOM...
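Something along these lines (an untested sketch; the element id is made up) would build the link from document.URL instead of relying on the referer:
<a id="w3c-link" href="#"><img src="valid-xhtml1.png" alt="Valid XHTML 1.0 Strict" height="31" width="88" /></a>
<script type="text/javascript">
document.getElementById("w3c-link").href =
    "http://validator.w3.org/check?uri=" + encodeURIComponent(document.URL);
</script>
(You may still need to swap the https scheme for http in that string, which is what the next answer ends up doing.)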
As for the SSL/Referer question:
The standard says that a client (i.e. a browser) should not send the Referer header when going from a secure page to a non-secure destination - so yes, in the mixed case the referer won't get sent to the non-secure URL.
OK, so it doesn't look like there is a way to do this with just HTML. So instead, I decided to use JavaScript to handle the issue.
I removed the <a> tag from around the W3C logo and added an onclick JavaScript function, validatePage(). So here is basically a template for an XHTML Strict page that still lets you include the validation icon.
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html;charset=utf-8" />
<title>Title of document</title>
<script type="text/javascript">
function validatePage() {
var validatorUrl = "http://validator.w3.org/check?uri=http" + (document.URL).substring(5);
window.open(validatorUrl);
}
</script>
</head>
<body>
<h1>Test Template Page</h1>
<p><img src="valid-xhtml1.png" alt="Valid XHTML 1.0 Strict" height="31" width="88" onclick="validatePage()" /></p>
</body>
</html>
Notice how the validatorUrl variable trims the "https" off the page URL and substitutes "http". So I simply avoided relying on the HTTP referer header altogether.
Hope this helps someone else.