I'm trying to relocate a few select posts from my Blogger URL to my new blog on a Wix website.
I'm trying to use the meta refresh tag to get the SEO transferred for each of my Blogger posts.
Blogger does not provide 301 redirects outside of the Blogger domain, hence I'm using meta refresh tags.
I notice that Wix's blog pages have AJAX-based URL links. Should I be providing the URL (of the Wix post) in the meta refresh tag (in the Blogger post) with the "#!", or should the URL in the meta refresh be the one with "?_escaped_fragment_"?
Which of these URLs will transfer the SEO from the blogger post to the Wix post?
If you intend to preserve the link profile and search engine optimisation value of the posts, then a meta refresh cannot fully replace a 301 redirect.
To answer your question, though, Google can deal with hashbang (#!) as well as escaped fragments, depending on how the Wix site is coded. You should definitely refer to Google's guide to making AJAX crawlable:
https://developers.google.com/webmasters/ajax-crawling/docs/learn-more
Use the following code in the <head> tag:
<noscript>
<meta http-equiv="Refresh" content="3;url=yourpage.html">
</noscript>
Google can understand the #! sign. That would not be a problem.
If you query site:www.[something-made-with-wix].com on Google, you'll see all the links in the form of #! in the results.
You can try this one as an example.
After much trial and error, I have found the answer to my own question.
Here's what happened when I did this on the old/url:
<meta http-equiv="Refresh" content="2; URL=new/url/#!BlogPost" />
This did the redirection after 2 seconds, but after weeks of waiting, the old/url continued to show on Google and the new/url never showed up.
Then I tried this on the old/url:
<meta http-equiv="Refresh" content="2; URL=new/url/?_escaped_fragment_=BlogPost" />
This did nothing either.
Then I figured out that if content=n (where n is a number other than 0), the refresh is treated as a 302 redirect, which is a temporary redirect.
So I tried the following:
<meta http-equiv="Refresh" content="0; URL=new/url/?_escaped_fragment_=BlogPost" />
This got a strange reaction from Google: the old/url was removed from the search results and the new/url was nowhere to be found either. This is bad; never do this.
The final option was:
<meta http-equiv="Refresh" content="0; URL=new/url/#!=BlogPost" />
This finally did the trick. The link juice passed from the old/url to the new/url after a few days. It is important, however, to go to Google Webmaster Tools and have the old/url re-crawled. Only then will the link juice be passed on.
Please look into this; it may be useful for you:
<html xmlns="http://www.w3.org/1999/xhtml">
<head><title>
Welcome Back
</title>
<meta http-equiv="Refresh" content="2; URL=/wwstore/Profile.aspx" />
</head>
You can add this into an ASP.NET page with code like this:
// *** Create META tag and add to header controls
HtmlMeta RedirectMetaTag = new HtmlMeta();
RedirectMetaTag.HttpEquiv = "Refresh";
RedirectMetaTag.Content = string.Format("{0}; URL={1}", this.Context.Items["ErrorMessage_Timeout"], NewUrl);
this.Header.Controls.Add(RedirectMetaTag);
But I never put 2 and 2 together to realize that the meta tag is actually mapping an HTTP header. A much easier way to do this is to simply add a header:
Response.AppendHeader("Refresh", "4");
Or refresh and go off to another page:
Response.AppendHeader("Refresh", "4; url=profile.aspx");
For more details please look here: http://weblog.west-wind.com/posts/2006/Aug/04/No-more-Meta-Refresh-Tags
I have a dynamic page where the contents and title will change based on the parameters in the URL. I want the same to be done for the meta description tag. As I don't have sound knowledge of SEO, I don't know whether this is valid or not.
Say, for example, the URL contains the word "test".
I will do,
if ("test" is present)
{
<title>test</title>
<meta name="description" content="test"/>
}
else
{
<title>test1</title>
<meta name="description" content="test1"/>
}
Can I do this? Does giving two meta descriptions for the same page work?
It is best practice for each web page to have its own title element and meta description, based on that page's content. The HTML5 specification does not forbid multiple <meta name="description" content="YOUR DESCRIPTION"> elements, but I would guess that search engines process only the first occurrence of the element. So my recommendation would be to use one <meta name="description" content="YOUR DESCRIPTION"> element per page.
As long as you code it server-side (e.g. in PHP) when the page is generated, rather than client-side (JavaScript) after the page has loaded, it will be fine. That's how most CMS systems work already.
Done server-side, only one of the description tags will actually appear in the code Google sees.
Done client-side, it is likely that they will see no description at all, as I don't think many search engines render JavaScript.
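To make that concrete, here is a minimal server-side sketch in ASP.NET, using the same HtmlMeta approach shown in an answer above. The query-string parameter "q" and the text values are placeholders; the exact mechanism will depend on your CMS or framework.
// Minimal sketch of a code-behind Page_Load (uses System.Web.UI.HtmlControls.HtmlMeta;
// requires <head runat="server"> in the markup). The parameter "q" and the values are placeholders.
protected void Page_Load(object sender, EventArgs e)
{
    string keyword = Request.QueryString["q"];

    // Emit exactly one <meta name="description"> before the HTML is sent,
    // so crawlers only ever see a single description per page.
    HtmlMeta description = new HtmlMeta();
    description.Name = "description";

    if (keyword == "test")
    {
        Page.Title = "test";
        description.Content = "test";
    }
    else
    {
        Page.Title = "test1";
        description.Content = "test1";
    }

    Header.Controls.Add(description);
}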
Please advise: if I add <meta name="robots" content="nofollow"> to the sub-master page, will it also apply to the links of the master page or not? Thanks.
example.com/master-page/sub-master-page
AND
example.com/master-page
These are two different URLs, therefore nofollowing links on one page will not affect the links on the other page.
You will have to include the nofollow meta tag on both pages separately to make external links nofollow on both pages:
<meta name="robots" content="nofollow"/>
Every page identified by a unique URL is treated as its own document, and crawlers index each URL separately. Given that, your meta tag on the sub page will not affect the parent page.
I am trying to figure out if it's possible to pass in more than a URL to share when using the LinkedIn JS API.
My code is:
IN.UI.Share().params({
url: 'http://www.example.com'
}).place();
Now I have tried to pass in other params like:
IN.UI.Share().params({
url: 'http://www.example.com',
title: 'A Title',
summary: 'A Small summary'
}).place();
But that did not work. It seems to just ignore those extra params.
I know I can do it using the custom share functionality:
http://www.linkedin.com/shareArticle?mini=true&url={articleUrl}&title={articleTitle}&summary={articleSummary}&source={articleSource}
But I want to use the JS API so I can get back a token to verify whether it was posted properly. With the shareArticle approach, it takes about 20-30 seconds to actually verify whether it was shared, using this: (https://developer.linkedin.com/retrieving-share-counts-custom-buttons).
Unfortunately there is no way to do this. The LinkedIn JavaScript API and LinkedIn Share button rely completely on meta tags to scrape information. Such a pity.
Just set the og: property tags on the page that you are sharing; that way LinkedIn knows that the title, image, etc. fields are all appropriate and right for the site. You can set them like so:
<meta property="og:title" content="Title of the article" />
<meta property="og:image" content="//media.example.com/1234567.jpg" />
<meta property="og:description" content="Description that will show in the preview" />
<meta property="og:url" content="//www.example.com/URL-of-the-article" />
Source: LinkedIn Developer Docs: Making Your Website Shareable on LinkedIn.
Works for my site!
You can always run your site's URL through the LinkedIn Post Inspector to make sure you did it right!
I'm hoping someone has some experience using the comments social plugin, specifically with regards to formatting the story Facebook publishes when a user leaves a comment.
I had expected the process would be exactly the same as the Like plugin, whereby I make sure the URL I'm using in the comments plugin points to a page that contains a bunch of OG meta tags, all correctly supplied and defined. Yet despite having set this up (and it working fine with Like buttons), and having run the target URL through the Linter tool and seeing everything appear as I expect (no warnings or errors either), whenever I have a test user leave a comment and publish the story to their wall, all I see is the comment they left and the full URL link displayed underneath.
It's pretty ugly on the one hand, and confusing on the other. All the meta data is present AFAIK and as I say, it works perfectly fine with the Like button; I get a nice image, title/description text etc.
Here's the meta data I'm using (note: the URL and IMAGE meta tags are dynamically written depending upon some querystring parameters in the comments plugin URL I'm using. I've also replaced potentially sensitive values with dummy values):
<meta property="fb:app_id" content="MY-APP-ID">
<meta property="og:type" content="article">
<meta property="og:url" content="https://apps.facebook.com/MY-APP/?key1=val1&key2=val2&key3=val3&key4=val4">
<meta property="og:site_name" content="My Site">
<meta property="og:image" content="http://domain.com/myimage.jpg">
<meta property="og:title" content="My title">
<meta property="og:description" content="Some description here">
<meta property="article:published_time" content="1341126000">
<meta property="article:expiration_time" content="1356940800">
<meta property="article:author" content="http://www.mywebsite.com/">
<meta property="article:section" content="My Section">
<meta property="article:tag" content="My Tag">
Is it that comments only create basic stories in the user's feed (seems unlikely)? Do I have to use "blog" or "website" as the "og:type" (seems unlikely too)?
Would appreciate any help!
Cheers
Lee
It turns out the problem was caused by the Page being set to unpublished, and the test app being set to sandbox mode. I've no idea why the Comments social plugin has this problem when the Like button social plugin doesn't, but hopefully this might help someone else with the same issue.
<meta property="og:url" content="https://apps.facebook.com/MY-APP/?key1=val1&key2=val2&key3=val3&key4=val4">
The og:url you specified could be causing your issue. This URL is supposed to be on the same domain as set in your app settings. As a test, change the URL and post a comment.
Leaving the URL as it is, Facebook tries to scrape the canvas page, which will cause undesired results.
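For example, og:url could point at a page on your own domain rather than the apps.facebook.com canvas URL (the path here is just a placeholder):
<meta property="og:url" content="http://www.mywebsite.com/my-article/">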
I have listing pages that take a page argument on the url like the following:
http://www.domain.com/foo/bar/?page=7
Should I just include the URL without params or should I list all pages in my sitemap.xml?
EDIT
The paginated content is a listing, like an index, so its content is also found (in more detail) on detail pages. But these paginated pages are the only way to reach the detail pages.
I really wanted to find you a reliable source for this one, but I couldn't. Which means you'll have to make do with my intuition:
If the articles exist only in their paginated form, and you want them to be indexed as separate pages, list them all. They'll all have distinct content on them, so you won't be penalised for duplication.
I found details of one exception: including page 1 twice. Basically you need to choose whether the first page will be /foo/bar/?page=1 or just /foo/bar/, then do a 301 redirect from the version you don't want to use.
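As a rough illustration, here is a minimal sitemap sketch using the example URL from your question, with the canonical first page listed once and each paginated listing page listed individually (which pages you include is up to you):
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- first page, under whichever URL you chose as the canonical version -->
  <url><loc>http://www.domain.com/foo/bar/</loc></url>
  <!-- each subsequent paginated listing page -->
  <url><loc>http://www.domain.com/foo/bar/?page=2</loc></url>
  <url><loc>http://www.domain.com/foo/bar/?page=3</loc></url>
</urlset>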
Hope this helps (even just a little).
Tom
No! You should add pagination link tags to your paginated pages. This helps Google understand your pagination system.
Example:
On page 1 you would add into <head>:
<link rel="next" href="http://www.example.com/article?story=abc&page=2" />
On page 2 you would add:
<link rel="prev" href="http://www.example.com/article?story=abc&page=1" />
<link rel="next" href="http://www.example.com/article?story=abc&page=3" />
On page 3 you would add:
<link rel="prev" href="http://www.example.com/article?story=abc&page=2" />
<link rel="next" href="http://www.example.com/article?story=abc&page=4" />
And on page 4 you would add:
<link rel="prev" href="http://www.example.com/article?story=abc&page=3" />
See this document: Pagination with rel=“next” and rel=“prev”
In this case the ?page=7 probably relates to the content management system's page. You can add this to your sitemap file. If you want each of these pages to be available to whatever uses the sitemap, then yes, you should add them.