SEO optimization for content generated by JavaScript?

I have created widgets for my website (xyz.com) which can be embedded in other websites. Let's say I embed a photo album widget in another website, abc.com. The content resides on xyz.com but is pulled into abc.com via JavaScript.
Will the content generated by the widgets (JavaScript) on abc.com be indexed by search engines?

Google will not index anything that is not visible when a page is loaded with JavaScript disabled.
There is more information in this similar question:
google indexing text retrieved by ajax or javascript after page load
Also, you can test what Googlebot 'sees' by using the "Fetch as Googlebot" feature of Google Webmaster Tools.
If you want Google to index your Ajax, you can read Google's recommendations here:
http://code.google.com/web/ajaxcrawling/docs/getting-started.html

If you follow Google's scheme for Making AJAX Applications Crawlable, then Google will index content that's generated with JavaScript. So will Bing and Yandex.
Implementing this scheme is somewhat involved, which is why there are companies that provide it as a service that plugs in at the web server level. (I work for one of these: https://ajaxsnapshots.com)
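For reference, the scheme maps #! URLs to an _escaped_fragment_ query parameter that crawlers request instead, and your server is expected to answer those requests with a pre-rendered HTML snapshot. A minimal Node/Express sketch of that idea (the renderSnapshot helper and port are assumptions, not part of the scheme itself):

```javascript
// Serve pre-rendered snapshots for _escaped_fragment_ requests, as described
// in Google's AJAX crawling scheme. Illustrative sketch only.
const express = require('express');
const app = express();

app.get('*', (req, res, next) => {
  if (req.query._escaped_fragment_ !== undefined) {
    // A crawler is asking for the snapshot of a #! URL: return static HTML.
    const fragment = req.query._escaped_fragment_ || '';
    return res.send(renderSnapshot(req.path, fragment)); // assumed helper
  }
  next(); // normal visitors get the regular JavaScript-driven page
});

function renderSnapshot(path, fragment) {
  // In practice this would run a headless browser or template the content
  // server-side; a static placeholder keeps the sketch self-contained.
  return `<html><body><h1>Snapshot for ${path}#!${fragment}</h1></body></html>`;
}

app.listen(3000);
```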

How does server-side rendering work in Nuxt when using dynamic slugs?

I've been developing an app where users can create a profile that can be accessed from a URL containing their username as a slug: https://myexample.com/username
The app works with an API. Every time a user accesses the above URL, I use asyncData from Nuxt to get the username, make an API request for all the user's information, and display it properly.
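For context, a minimal sketch of that asyncData setup (assuming the @nuxtjs/axios module; the API URL and field names are assumptions):

```javascript
// pages/_username.vue (script section) - a sketch of the setup described above.
export default {
  async asyncData({ params, $axios }) {
    // The slug from /username is available as params.username.
    const user = await $axios.$get(`https://api.myexample.com/users/${params.username}`);
    return { user }; // fetched on the server, so the rendered HTML contains the data
  },
  head() {
    // A per-profile title helps each page stand on its own for search engines.
    return { title: this.user.name };
  }
};
```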
My concern is: without a static URL for each user's profile, will these URLs rank on Google?
My analysis is that, because the pages are only reachable through a dynamic slug, Google will not be able to discover all the users that have a URL, but I'm not sure if my thinking is correct.
My goal is to let users find their profiles online by taking advantage of Nuxt's SEO benefits, but I'm not sure I'm using the correct approach.
Any feedback on this will help a lot, thanks in advance.
Google will only index pages it can find when crawling your site. If no links to the users' pages exist anywhere on the site, Google has no knowledge of them and they won't be indexed.
That said, you can create a sitemap file and submit it to Google so that it has a list of all the pages you would like it to index. This way, no internal links are required. Manually creating a sitemap for a website with a large number of dynamic pages can be tedious; however, there are usually tools available to automate this, depending on your setup.
EDIT
As you tagged this question with Nuxt, you could take a look at @nuxtjs/sitemap.
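For example, a minimal sketch of that module generating one sitemap entry per user profile (the hostname, API endpoint and response shape are assumptions):

```javascript
// nuxt.config.js - sketch of @nuxtjs/sitemap with dynamically generated routes.
import axios from 'axios';

export default {
  modules: ['@nuxtjs/sitemap'],
  sitemap: {
    hostname: 'https://myexample.com',
    // Build one sitemap entry per user profile from your API.
    routes: async () => {
      const { data: users } = await axios.get('https://api.myexample.com/users');
      return users.map(user => `/${user.username}`);
    }
  }
};
```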

Do I need to submit a separate (mobile) sitemap for AMP pages?

Before responsive design we needed mobile-specific sitemaps, but with responsive design they were no longer needed.
But with the introduction of Accelerated Mobile Pages (AMP), we again have mobile-specific URLs, so my questions are:
Do we need a separate (mobile) sitemap for AMP pages?
If yes, then what schema should we use?
The old schema, http://www.google.com/schemas/sitemap-mobile/1.0, or something new?
No need, providing you have a rel="amphtml" link in your regular page to tell crawlers where the AMP HTML version is, as discussed here:
https://www.ampproject.org/docs/guides/discovery.html
Similarly, your AMP pages should have a rel="canonical" link pointing to the real page, to avoid search engines thinking you have duplicate content.
In fact, for Google, the Google Search Console for your site has an AMP section (under Search Appearance) that shows all the AMP pages it has found and whether there are any problems with them.
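If you want to sanity-check that pairing yourself, here is a quick snippet you can paste into the browser console on each version of a page (purely illustrative, not an official tool):

```javascript
// On the regular page, expect a rel="amphtml" link; on the AMP page,
// expect a rel="canonical" link back to the regular page.
const ampLink = document.querySelector('link[rel="amphtml"]');
const canonicalLink = document.querySelector('link[rel="canonical"]');
console.log('amphtml ->', ampLink ? ampLink.href : 'missing');
console.log('canonical ->', canonicalLink ? canonicalLink.href : 'missing');
```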
As BazzaDP said, there's no need for a separate sitemap, but you do need to add the rel="amphtml" link to the head of the page. That said, it can still be good to have a separate sitemap for your AMP pages: the main reason is that it makes it easier for Google's crawler (and other search engines) to detect the AMP versions and display them in search results, even though it is not necessary. My opinion: if making a sitemap for your AMP pages is difficult with your stack, leave it; if it isn't, do it. Creating a separate sitemap doesn't give you any particular advantage.
As for your question: no, there is no need for it.

Could ASP.NET Web API be bad for SEO?

Will a Web API-based website suffer SEO problems?
Given that all of a page's content is pulled in by JavaScript...
Will search engine crawlers be able to get the page content?
I've heard that crawlers do not always support or execute JavaScript when crawling a page.
It's not Web API that is bad for SEO; it's choosing an architecture where you use a web browser to navigate to empty HTML pages and then use JS to pull in the data. ASP.NET Web API does not have to be used that way.
You can't blame a hammer for building a bad house.
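To make the distinction concrete, here is a minimal Node/Express sketch contrasting the two architectures (deliberately not ASP.NET; the routes and data are assumptions for illustration):

```javascript
const express = require('express');
const app = express();

// SEO-friendly: the served HTML already contains the content.
app.get('/products-rendered', (req, res) => {
  res.send('<html><body><h1>Products</h1><ul><li>Widget</li></ul></body></html>');
});

// Risky for SEO if the crawler does not execute JavaScript: an empty shell
// that fetches its content from an API after the page loads.
app.get('/products-client', (req, res) => {
  res.send(`<html><body><div id="app"></div>
    <script>
      fetch('/api/products').then(r => r.json()).then(items => {
        document.getElementById('app').innerHTML =
          '<h1>Products</h1><ul>' + items.map(i => '<li>' + i + '</li>').join('') + '</ul>';
      });
    </script></body></html>`);
});

app.get('/api/products', (req, res) => res.json(['Widget']));

app.listen(3000);
```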
Depends.
Will ALL search engine crawlers be able to get the page content? I do not know.
Do the search engine crawlers that matter get the page content? Yes.
Google and Bing combined own the search market, and both can index content pulled in by JavaScript (and probably others can as well).
Robert Scavilla on how content is indexed.
Search Engine Land on Google executing JavaScript for indexing.

Shopify search engine add-on

I'm working on a search engine add-on.
Is it possible to add my search engine add-on to a Shopify shop's front page?
I have researched http://www.searchifyapp.com; how do they customize the Shopify search page?
I am not familiar with searchifyapp.com, so I can't tell you how it works, but I can tell you how it could be done.
If you want to have the shop's data indexed on your search server, then you will want to import the data from Shopify on installation and use webhooks for updates. The Syncing with a Store Shopify Wiki page explains how this is done.
You can use ScriptTags to inject JavaScript into the storefront. That JavaScript can then find and enhance/modify the search field/form (e.g. for autocompletion/suggestions as the user types, or to modify the URL of the search results page).
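A rough sketch of creating such a ScriptTag through the Shopify Admin REST API (the API version, shop/token variables and script URL are assumptions):

```javascript
// Registers a ScriptTag so your hosted JavaScript is injected into the storefront.
const fetch = require('node-fetch');

async function installScriptTag(shop, accessToken) {
  const response = await fetch(`https://${shop}/admin/api/2023-04/script_tags.json`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-Shopify-Access-Token': accessToken
    },
    body: JSON.stringify({
      script_tag: {
        event: 'onload',                                  // run when the storefront loads
        src: 'https://search-addon.example.com/widget.js' // your hosted script
      }
    })
  });
  return response.json();
}
```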
If you want custom search results (e.g. from your own search server), then you could create an Application Proxy to serve results from your own web application.
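For that part, a minimal sketch of an app-proxy endpoint on your own server (Express again; the route path and response shape are assumptions, and a real implementation should also verify the proxy signature):

```javascript
const express = require('express');
const app = express();

// Shopify forwards storefront requests for the proxied path to this endpoint.
app.get('/proxy/search', (req, res) => {
  const query = req.query.q || '';
  // In a real app, verify the request signature and query your own index here.
  res.json({ query, results: [] });
});

app.listen(3000);
```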

How to find inbound links to a given URL on the fly?

Technorati's got their Cosmos API, which works fairly well but limits you to noncommercial use and no more than 500 queries a day.
Yahoo's got a Site Explorer InLink Data API, but it defines the task very literally, returning links from sidebar widgets in blogs rather than just links from inside blog content.
Is there any other alternative for tracking who's linking to a given URL (think of the discussion links that run below stories on Techmeme.com)? Or will I have to roll my own?
Well, it's not an API, but if you google (for example): "link:nytimes.com", the search results that come back show inbound links to that site.
I haven't tried to implement what you want yet, but the Google search API almost certainly has that functionality built in.
Is this for links to URLs under your control?
If so, you could whip up something quick that logs entries in the Referer HTTP header.
If you wanted to do this for an entire website without altering application code, you could implement it as an ISAPI filter or equivalent for your web server of choice.
Information available publicly from web crawlers is always going to be incomplete and unreliable (not that my solution isn't...).
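If Node is an option, here is a quick sketch of that referrer logging as Express middleware (an alternative to an ISAPI filter; the log file and format are assumptions):

```javascript
const express = require('express');
const fs = require('fs');
const app = express();

app.use((req, res, next) => {
  const referrer = req.get('Referer'); // the header name is historically misspelled
  if (referrer) {
    // Append "timestamp referrer -> requested URL" to a simple log file.
    fs.appendFile('inbound-links.log',
      `${new Date().toISOString()} ${referrer} -> ${req.originalUrl}\n`,
      () => {});
  }
  next();
});

app.get('*', (req, res) => res.send('ok'));
app.listen(3000);
```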