I'm trying to enable Google Sitelinks Search Box.
It's something that lets Google display a search text box directly in its results: https://developers.google.com/structured-data/slsb-overview
I added it to the website on 27 July, using the JSON-LD syntax. According to Google, the currently cached version of the website dates from 6 August (so long after deployment), but the search text box has never appeared in Google.
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "WebSite",
  "url": "https://fr.blabla.com/",
  "potentialAction": {
    "@type": "SearchAction",
    "target": "https://fr.blabla.com/acheter/{search_term_string}?page=1",
    "query-input": "required name=search_term_string"
  }
}
</script>
Sorry, I can't disclose the real URL. On the screenshot, the subdomain and the end of the URL are the real ones; the website is served over HTTPS.
The "search engine" URL works fine if I call it directly.
The code is located in the <head> section of my page (moving it is not easy to test: the website is a huge e-commerce platform and I can't experiment as freely as I'd like).
The Google test tool (https://developers.google.com/structured-data/testing-tool/) seems to validate my code: http://i.stack.imgur.com/PyYpL.jpg
I found another website (type "cdiscount" in Google) which uses it exactly like I do, and it seems to work for them. The only differences I notice are that they are not on HTTPS, their subdomain is www, and their <script> tag is somewhere in the <body>.
There are two things to consider here...
It can take Google a while to implement your code. I've seen it take a couple of months.
Adding sitelinks to the SERPs is at Google's discretion, so having the code on your site is no guarantee they will show sitelinks for it.
So I'm afraid it's now a waiting game.
Good luck.
Actually, it only takes a few days at most after you use Fetch as Google to have the site re-crawled.
The documentation states that it is at their discretion and may depend on how busy your site is, but mine has been running for under a year and isn't busy, and they still work (I was surprised, since I could not confirm this elsewhere).
A common mistake is not knowing how to check for sitelinks: you need to search for the full website name, e.g. blabla.com or blabla; using site:blabla.com won't show the sitelinks.
If you want to see the search text box as well, make sure you add the code only to the home page of your site.
I'm currently in the process of writing a REST API and this question always seems to popup.
I've always just added a description, quick links to the docs, server time, etc., but I now see (after looking around a bit) that a simple redirect to the API docs would be even better.
My question is: what would be the accepted norm for the root ('/', the "homepage") of your API?
I've been looking at a few implementations:
Facebook: just gives an error of "Unsupported get request";
Twitter: Shows an actual 404 page;
StackOverflow: Redirect to quick "usage" page.
After looking at those it's clear everyone is doing it differently.
In the bigger picture this is of little significance, but it would be interesting to see what the "RESTful" way of doing it (if there is one) might be.
Others have had the same question, and as you discovered yourself, everyone is doing it their own way. There is a move to somehow standardize this, so see if you find this draft useful:
Home Documents for HTTP APIs, aka JSON Home.
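As a rough illustration, a home document along the lines of that draft could look something like this (a sketch only: field names have varied between draft versions, and the resource names and URLs below are made up):

{
  "resources": {
    "tag:example.com,2015:widgets": {
      "href": "/widgets/",
      "hints": {
        "allow": ["GET", "POST"],
        "formats": { "application/json": {} }
      }
    },
    "tag:example.com,2015:widget": {
      "hrefTemplate": "/widgets/{widget_id}",
      "hrefVars": {
        "widget_id": "https://api.example.com/docs/params#widget_id"
      },
      "hints": {
        "allow": ["GET", "PUT", "DELETE"]
      }
    }
  }
}

Serving something like this from '/' (the draft proposes the application/json-home media type) gives clients a machine-readable starting point instead of a 404 or a bare redirect.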
I've given this much thought, and right now I either return a 404 page, a health-status page, or a dummy page, or redirect to another page, most likely one within the organization.
An API homepage isn't something everyone should be looking at, and believe me, it can be found. There are plenty of people like me who love to open the browser's inspector and see how a website is performing.
I am new to SEO; I have done some research and read several guides, but I am still confused.
A Google guide says:
Avoid creating complex webs of navigation links, e.g. linking every
page on your site to every other page.
I have an e-commerce website. We intend to create a page for each issue of a magazine. Issue pages will have Next and Previous link buttons which move from one issue to another.
Is that a bad idea? Am I violating this rule, or is Google talking about another scenario?
Will that cause all 1,000 issues to be indexed, given that the links are dynamic and I will use URL rewriting?
Thanks
This won't be a problem with Google. They clearly explain why it is a good thing to do and how to do it properly.
If you want to fully control how your link juice is passed and which landing page Google sends users to on a small website, this method is not recommended.
But for a website with more than 1,000 unique pages (where you can't fully control and influence the crawler's behaviour), you can use it to ease the crawler's indexing work and improve the landing page for users.
Pagination can be a fairly complicated aspect of SEO, especially for ecommerce sites.
Here are a few general tips:
If you have a "view all" page, you probably should rel="canonical" all your paginated pages to that page. This is acceptable because the content is identical
If you don't have a "view all" page, but you want Google to treat the first page as the "canonical" or you want to drive all users to the first page, then use the rel=next/prev attributes to "group" together your like pages
For ecommerce faceted navigation, you should probably use a combination of rel=next/prev and query parameter controls through Google Webmaster Tools
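Here is a minimal sketch of the first two patterns, using a made-up category URL (the paths and page numbers are placeholders); these tags go in the <head> of the paginated page:

<!-- Pattern 1: a "view all" page exists, so page 2 of the series points to it -->
<link rel="canonical" href="https://www.example.com/shoes/view-all">

<!-- Pattern 2: no "view all" page, so page 2 declares its neighbours in the series -->
<link rel="prev" href="https://www.example.com/shoes?page=1">
<link rel="next" href="https://www.example.com/shoes?page=3">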
At the June 2012 SMX Advanced conference, there were a few good presentations and live-blogging posts that highlight a number of these aspects. Most notably, Googler Maile Ohye spoke during that conference ... she's sort of the Queen of Pagination ;)
http://www.slideshare.net/audette/seo-for-pagination-faceted-navigation-canonicalization-hits-and-misses
http://outspokenmedia.com/internet-marketing-conferences/pagination-canonicalization-for-the-pros-smx-advanced-2012/
http://www.bruceclay.com/blog/2012/06/pagination-canonicalization-for-the-pros-smx-advanced/
You might also want to watch this Google video with Maile talking about Pagination http://googlewebmastercentral.blogspot.com/2012/03/video-about-pagination-with-relnext-and.html
Last thing to note ... Bing doesn't support rel=next/prev at this time: http://searchengineland.com/no-bing-doesnt-support-pagination-attributes-to-consolidate-pages-in-a-series-118694
If I understand you correctly, YES, Google is talking about another scenario.
The Next and Previous links on the issue pages, used for navigating from one issue to another, are different from <link rel="next" ...> and <link rel="prev" ...>, which appear in the <head> ... </head> section of the HTML source.
Google will treat web pages with <link rel="next" ...> and/or <link rel="prev" ...> as a series of pages.
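To make the distinction concrete, here is a rough sketch for a hypothetical "issue 42" page (the URLs are made up):

<head>
  <!-- Machine-readable pagination hints read by search engines -->
  <link rel="prev" href="https://www.example.com/magazine/issue-41">
  <link rel="next" href="https://www.example.com/magazine/issue-43">
</head>
<body>
  <!-- Ordinary navigation links that visitors click; these are what your
       Next/Previous buttons are, and on their own they don't declare a series -->
  <a href="https://www.example.com/magazine/issue-41">Previous issue</a>
  <a href="https://www.example.com/magazine/issue-43">Next issue</a>
</body>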
I have a website, and on it I have, for example, a list of Audi models. Using Google Webmaster Tools, I saw that my website appears in Google search for the word "audi", but the target page was the 22nd page of my result set, not the first. I need my first page to appear, not my last (or a middle one), but I cannot tell Google that this is a pagination parameter, because my URLs are rewritten using mod_rewrite. Any ideas?
BTW, I have read in an SEO forum that it's a bad idea to use a canonical tag. Is it really a bad idea in my case?
You can't force Google to do anything, however, they have made it easier to deal with pagination issues with a recent post on rel="next" and rel="prev".
But the primary problem you face is signalling to Google that your first (main) page is the starting point - this is achieved using internal link and back-link "juice" focussed on that page. You need to ensure that the first page of results is linked to properly from higher-value pages (like the home-page).
Google recently announced that you can use View All which will allow them to find and index entire articles that are normally broken up using pagination and display them all as one result.
I'm maintaining an existing website that wants a site search. I implemented the search using the Yahoo API. The problem is that the API is returning irrelevant results. For example, there is a sidebar with a list of places, and if a user searches for "New York" the top results will be pages that do not have "New York" in their main content section. I have tried adding Yahoo's class="robots-nocontent" to the sidebar, but that was two weeks ago and there has been no update.
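For context, the sidebar markup I'm describing looks roughly like this (the class names and links are placeholders):

<div class="sidebar robots-nocontent">
  <!-- robots-nocontent is a Yahoo-specific hint asking its crawler to ignore
       this block when judging what the page is about -->
  <ul>
    <li><a href="/places/new-york">New York</a></li>
    <li><a href="/places/boston">Boston</a></li>
  </ul>
</div>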
I also tried out Google's Search API but am having the same problem.
This site has mostly static content and about 50 pages total so it is very small.
How can I implement a simple search that only searches the main content portions of the page?
At the risk of sounding completely self-promoting as well as pushing yet another API on you, I wrote a blog post about implementing Bing for your site using jQuery.
The advantage in using the jQuery approach is that you can tune the results quite specifically based on filters passed to the API and playing around with the JSON (or XML / SOAP if you prefer) result Bing returns, as well as having the ability to be more selective about what data you actually have jQuery display.
The other thing you should probably be aware of is how to effectively use rel attributes on your content (especially links) so that search engines are aware of the relationship between the actual content they're crawling and the destination content it links to.
First, post a link to your website... we can probably help you more if we can see the problem.
It sounds like you're doing it wrong. Google Search should work on your website, unless your content is hidden behind JavaScript or forms or something, or your site isn't properly interlinked. Google solved crawling static pages long ago, so if that's what you have, it will work.
So, tell me... does your site say New York anywhere? If it does, have a look at the page and see how the word is used... maybe your site isn't as static as you think. Also, are people really going to search your site for New York? Why don't you try some search terms that are actually likely to be on your site.
Another thing to consider: if your site is really just 50 pages, is it realistic that people will want to search it? Maybe you don't need search... maybe you just need something like a commonly used links section.
The BOSS Site Search Widget is pretty slick.
I use the bookmarklet, but set it as my "home" page in my browser. So whatever site I'm on, I can hit my "home" button (which I never used anyway) and it pops up that handy site search.
I have a blog built in WordPress, and my domain name is something like example.com (I can't give you the original name, because sometimes the editors will mark this question as SPAM :( ; if anyone really wants to check, I will add it directly at the end of the question).
The site is http://example.com and the blog is at http://example.com/articles/
and the sitemap.xml is available at http://example.com/sitemap.xml
Google visits my site daily and all my new articles get crawled. If I search for the article's title + example.com, I get a search result from Google for my site, but the heading is not the actual one; it is taken from another article's data.
(I think I can give you a sample search query; please don't take this as spam.)
Installing Go Language in Ubuntu+tutorboy. But this is listed with the proper title only after a long title :(. I think you now understand what I am facing... please help me find out why this happens.
Edit:
How can I improve my SEO with WordPress?
When I search that query I don't get the "Installing Go..." page; I get the "PHP header types" article, which has the above text on the page (in the links at the right). So the titles showing in Google are correct.
Google has obviously not crawled that page yet since it's quite new. Give it time, especially if your site is new and/or unpopular.
A couple of things I need to make clear:
Google crawled your site on 24 Nov 2009 12:01:01 GMT, so needless to say, Google does not actually visit your site (blog) every day.
When I queried the phrase you provided, the results were right. There are two URLs related to your site: one is the home page of your blog, the other is the page more closely related to your query. The reason is that the query phrase is directly related to the page tutorboy.com/articles/php/use-php-functions-in-javascript.html; however, your home page still contains some related keywords. That is why Google presents two pages on the result page.
Your second question is hard to answer briefly, since it needs a complicated answer. Still, the following steps are crucial for your SEO.
Unique, good content. Content is king, and it is the one element that stays constant while everything else changes as search engine technology evolves. Also keep your site content fresh.
Backlinks. Part of the reason Google does not visit your site right after you update it is that your site lacks enough backlinks.
Good structure. Properly use tags such as <title>, the meta description, and alt attributes (see the sketch after this list).
Use web analytics tools like Google Analytics. It's free, and you can see a lot of things that you missed.
Most importantly, grab some SEO books or spend a couple of minutes every day reading SEO articles.
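For example, a cleanly structured page might use those tags roughly like this (the title, description, and image are placeholders):

<head>
  <title>Installing the Go Language on Ubuntu | Example Blog</title>
  <meta name="description" content="A step-by-step guide to installing Go on Ubuntu.">
</head>
<body>
  <!-- Descriptive alt text helps search engines understand the image -->
  <img src="go-install-terminal.png" alt="Terminal output from installing Go on Ubuntu">
</body>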
Good Luck,