I want to make my fully static multilingual site searchable. The site is available in 3 languages. I use a technique that ties each unique key in a view to content in the corresponding locale file (it's in YAML), like this:
In my view files
<%= t ".unique_key" %>
In my locale file
en:
  filename:
    unique_key: "This is unique content which I want to search"
So far I have tried googling but haven't found any solution. The only idea that comes to mind is to build my own local site crawler.
How did you solve this problem?
This post by Google might help: Working with multilingual websites.
The Crawling and Indexing your multilingual site section should answer your question.
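For the actual site-search part of the question, one possible sketch (this is just an assumption on my part: it presumes Rails-style config/locales/*.yml files and a client-side search library such as Lunr, and the file paths are hypothetical) is to flatten each locale file into a per-locale JSON index at build time:

require "yaml"
require "json"

# Flatten a nested locale hash into { "filename.unique_key" => "text" } pairs.
def flatten(hash, prefix = nil)
  hash.each_with_object({}) do |(key, value), out|
    full_key = [prefix, key].compact.join(".")
    if value.is_a?(Hash)
      out.merge!(flatten(value, full_key))
    else
      out[full_key] = value.to_s
    end
  end
end

Dir.glob("config/locales/*.yml").each do |path|
  data   = YAML.load_file(path)
  locale = data.keys.first                         # e.g. "en"
  entries = flatten(data[locale]).map { |key, text| { id: key, text: text } }
  File.write("public/search-#{locale}.json", JSON.pretty_generate(entries))
end

Each locale then gets its own index file, and the page can load the one matching the current locale and search it entirely in the browser.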
Related
I have recently launched a website and am therefore trying to figure out the SEO tricks to make it more visible. I use prerender.io to render JavaScript.
Can you please tell me how to show extended URL results alongside the main website link? Is there anything specific I need to do to get the results in that particular format?
For example: here the main URL is Google Voice and the rest are extended URLs.
Well, there are no rules for this structure. Often my old sites got this structure, but not the new one.
Google has its own logic for building this structure.
I have a question/answer website where each question has its own link.
My problem is: how do I feed these links to Google?
Should I list the links in "sitemap.xml" or "robots.txt"?
What is the standard solution to this problem?
Thanks
Amit Aggarwal
Some advice:
First, make sure your website is SEO-friendly and crawlable by search engines.
Second, make sure to publish your website's sitemap to Google.
To do that, add your site to Google Webmaster Tools and submit your sitemap (XML, RSS, or Atom feed formats).
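For example, a minimal sitemap.xml listing your question pages could look like this (the URLs and date are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/questions/how-do-i-feed-links-to-google</loc>
    <lastmod>2014-01-15</lastmod>
  </url>
  <url>
    <loc>http://www.example.com/questions/another-question</loc>
  </url>
</urlset>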
Consider using a URL rewriting tool to convert your URLs from dynamic to a more SEO- and user-friendly form:
Example:
FROM:
example.com/product?id=100
TO:
example.com/nameproduct
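If the site runs on Apache, a rough .htaccess sketch of such a rewrite (assuming, as a compromise, that you keep the numeric id at the end of the friendly URL so no database lookup is needed; the pattern and handler name are illustrative):

RewriteEngine On
# Map /some-product-name-100 internally to /product?id=100
RewriteRule ^([a-z0-9-]+)-([0-9]+)$ /product?id=$2 [L,QSA]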
Related information:
https://support.google.com/sites/answer/100283?hl=en
https://support.google.com/webmasters/answer/183668?hl=en
I have a business listing site (www.brate.com) where people can search for local businesses and rate them.
The entire site is built using GWT (i.e. Ajax) and all content is generated dynamically. Now I am at the phase where I want the site to be SEO-friendly. Below is my approach; please advise me if it's the best way to implement it.
1- Create a static HTML snapshot of each business and its related data (site, address, phone number, user reviews, etc.) and put all the generated HTML files under a single directory
2- Create a sitemap XML file that contains all the above HTML links
3- Configure Webmaster Tools to crawl and index all generated HTML snapshots
Now my logic is that when a Google search lists one of the above generated HTML files in its results, I want to redirect the user to the site's main page (www.brate.com), not the HTML snapshot.
Can I use a redirect like "" in the generated snapshots?
If not, what is the best way to achieve the above-mentioned logic?
Thanks
Sameeh, one suggested approach for GWT:
Ensure that you have correctly handled history tokens for all your pages in GWT. Let the tokens start with an exclamation mark (!).
Associate GWT history tokens with generated pages using the #! notation.
Make the tokens keyword-rich, as we do for any URL optimization in SEO.
Read through https://developers.google.com/webmasters/ajax-crawling/ to understand the #! notation.
Details on support by Bing: http://searchengineland.com/bing-now-supports-googles-crawlable-ajax-standard-84149
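As a concrete illustration of the #! mapping described in those links (the URL paths are made up), the crawler fetches an _escaped_fragment_ version of each #! URL, and pages without a hash fragment can opt in with a meta tag:

What the user sees:        http://www.brate.com/#!business/acme-coffee
What the crawler fetches:  http://www.brate.com/?_escaped_fragment_=business/acme-coffee

<meta name="fragment" content="!">

The server answers the _escaped_fragment_ request with the pre-rendered HTML snapshot.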
I ran my website through a web tool that evaluates the SEO weight of elements, and the report says that certain parts, like the description and other meta tags, are missing... Also, as a thumbnail of my site, it shows a default server page. At the same time it shows the list of other pages that are linked from the index page.
I checked, and this agent is not blocked in robots.txt.
Now, how can that be?
Demo
I think that the description issue is caused by the fact that you are using "META" instead of "meta" in your meta tags.
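For reference, a lowercase description tag would look like this (the content text is just a placeholder):

<meta name="description" content="A short summary of what this page is about.">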
There are many sites out there that can run similar tests on your site, such as the one you provided. It is just that site showing old data. You may want to submit your sitemap.xml to Bing and Google Webmaster Tools. If your site doesn't have a sitemap.xml file, you may want to consider creating one.
I have recently started using Google Webmaster Tools.
I was quite surprised to see just how many links Google is trying to index:
http://www.example.com/?c=123
http://www.example.com/?c=82
http://www.example.com/?c=234
http://www.example.com/?c=991
These are all campaigns that exist as links from partner sites.
For right now they're all being blocked by my robots.txt file until the site is complete, as is EVERY page on the site.
I'm wondering what the best approach to dealing with links like this is, before I make my robots.txt file less restrictive.
I'm concerned that they will be treated as different URLs and start appearing in Google's search results. They all correspond to the same page, give or take. I don't want people finding them as they are and clicking on them.
My best idea so far is to render a page that contains a query string as follows:
<%-- DO NOT TRY THIS AT HOME. See edit below. --%>
<% if (Request.QueryString.Count > 0) { %>
    <meta name="robots" content="NOINDEX, NOFOLLOW">
<% } %>
Do I need to do this? Is this the best approach?
Edit: This turns out NOT TO BE A GOOD APPROACH. It turns out that Google is seeing NOINDEX on a page that has the same content as another page that does not have NOINDEX. Apparently it figures they're the same thing and the NOINDEX takes precedence. My site completely disappeared from Google as a result. Caveat: it could have been something else I did at the same time, but I wouldn't risk this approach.
This is the sort of thing that rel="canonical" was designed for. Google posted a blog article about it.
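A sketch of what that looks like in the <head> of each campaign variant (the domain is a placeholder):

<link rel="canonical" href="http://www.example.com/">

With that in place, example.com/?c=123 and the other campaign URLs are treated as duplicates of the canonical URL and consolidated in the index.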
Yes, Google would interpret them as different URLs.
Depending on your web server, you could use a rewrite filter to remove the parameter for search engines, e.g. the URL rewrite filter for Tomcat, or mod_rewrite for Apache.
Personally I'd just redirect to the same page with the tracking parameter removed.
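A minimal sketch of that in ASP.NET Web Forms (assuming the c parameter is only needed for tracking; record it however you like before redirecting). Note that Response.RedirectPermanent needs .NET 4 or later; on older versions you would set the 301 status and Location header yourself:

protected void Page_Load(object sender, EventArgs e)
{
    // If a campaign parameter is present, log it if needed, then send the
    // visitor (and any crawler) to the same page without the query string.
    if (!string.IsNullOrEmpty(Request.QueryString["c"]))
    {
        Response.RedirectPermanent(Request.Url.AbsolutePath);
    }
}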
That seems like the best approach, unless the page exists in its own folder, in which case you can modify the robots.txt file to ignore just that folder.
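For the folder case, the robots.txt entry is just (the folder name is hypothetical):

User-agent: *
Disallow: /campaigns/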
For resources that should not be indexed, I prefer to do a simple early return in Page_Load:
// Bail out early for known crawlers; IsBot is a custom helper (see sketch below).
if (IsBot(Request.UserAgent))
    return;
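IsBot isn't shown above; a minimal sketch of what such a helper might look like (the user-agent list is illustrative, not exhaustive):

private static readonly string[] BotSignatures = { "googlebot", "bingbot", "slurp", "duckduckbot" };

private static bool IsBot(string userAgent)
{
    // Treat any user agent containing a known crawler signature as a bot.
    if (string.IsNullOrEmpty(userAgent))
        return false;

    var ua = userAgent.ToLowerInvariant();
    foreach (var signature in BotSignatures)
    {
        if (ua.Contains(signature))
            return true;
    }
    return false;
}

ASP.NET also exposes Request.Browser.Crawler, though that relies on the server's browser definition files being up to date.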