How does Server Side Rendering work in Nuxt when using dynamic slugs? - seo

I've been developing an app where users can create a profile that can be accessed from a URL containing their username as a slug: https://myexample.com/username
The app works with an API: every time a user accesses the URL above, I use asyncData from Nuxt to read the username, make an API request for all of that user's information, and display it properly.
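For reference, here is a minimal sketch of what the page does (the API endpoint, field names, and the @nuxtjs/axios module are assumptions for illustration, not my exact code):

```js
// pages/_username.vue (sketch): dynamic route matched by the slug
export default {
  // asyncData runs on the server for the first request, so the
  // rendered HTML already contains the profile data for crawlers
  async asyncData({ params, $axios, error }) {
    try {
      // hypothetical endpoint; assumes the @nuxtjs/axios module
      const user = await $axios.$get(`https://api.myexample.com/users/${params.username}`)
      return { user }
    } catch (e) {
      error({ statusCode: 404, message: 'User not found' })
    }
  }
}
```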
My concern is: without a static URL for each user's profile, will these URLs rank on Google?
My analysis is that, because these pages exist only behind a dynamic slug, Google will never notice that all these user URLs exist, but I'm not sure my thinking is correct.
My goal is to let users find their profiles online, taking advantage of Nuxt's SEO benefits, but I'm not sure I'm using the correct approach.
Any feedback on this will help a lot, thanks in advance.

Google will only index pages it can find when crawling your site. If no links to the users' pages exist anywhere on the site, Google has no knowledge of them and they won't be indexed.
That said, you can create a sitemap file and submit it to Google, so that it has a list of all the pages you would like it to index. This way, no internal links are required. Manually creating a sitemap for a website with a large number of dynamic pages can be tedious; however, there are usually tools available to automate this, depending on your setup.
EDIT
As you tagged this question with Nuxt, you could take a look at @nuxtjs/sitemap.
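As a rough sketch of how that module can pick up your dynamic profile routes (the API endpoint returning usernames is a hypothetical assumption):

```js
// nuxt.config.js (sketch)
import axios from 'axios'

export default {
  modules: ['@nuxtjs/sitemap'],
  sitemap: {
    hostname: 'https://myexample.com',
    // one sitemap entry per user profile
    routes: async () => {
      // hypothetical endpoint returning e.g. ["alice", "bob"]
      const { data } = await axios.get('https://api.myexample.com/usernames')
      return data.map(username => `/${username}`)
    }
  }
}
```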

Related

Google indexing user profile pages of my website

I have a LinkedIn-like website, and I need Google to index each public profile page.
A few things on my mind:
1. There is no user listing on my site, and I also don't want to show a full profile listing on the website.
How will Google index all the profile pages?
If you don't want to make a listing accessible to everyone, you should probably create a sitemap.xml that Google can read. If you are using Webmaster Tools, you can submit the sitemap's URL there, and Google will index all the URLs it contains.
Read more about sitemaps here:
https://support.google.com/webmasters/answer/183668?hl=en
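For illustration, a sitemap is just a flat XML list of the URLs you want crawled; the domain and paths below are placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/profile/jane-doe</loc>
  </url>
  <url>
    <loc>https://example.com/profile/john-smith</loc>
  </url>
</urlset>
```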

Tracking query strings in Shopify, using webhooks and admin options

I was redirected here by Shopify support. I have three main questions for a project I'll be working on and wanted to see how feasible some of these things would be.
We are looking to develop a plugin for use with Shopify to track purchases through the use of a link shortener (to see which link referred what purchases, etc.). I have a few questions that I'm not 100% sure on even after reading through the documentation.
The first problem I seem to have is tracking the query string that the link shortener appends to the URL once it redirects. For this service, they use "?visit_id={hash}", and I need to be able to access this, at the very least on the "Thank You" page after an order. I saw in the docs that there is "landing_site_ref" (http://wiki.shopify.com/Order#landing_site_ref), but considering our query string is "visit_id" instead of one of the accepted parameters, how would I be able to use that query string?
Secondly, I have a question about how webhooks work with plugins that are on the App Store. I know I can point webhooks wherever I want, like my personal server, but if this app gets onto the App Store, I obviously don't want to hook everything to my own server. Is there a way to make it run on the store itself, and which URL should I use?
Lastly, what is the preferred method for handling configuration options for the plugin? Is there a way to hook into the admin backend, or would all configuration have to live in a file within the plugin?
Thanks,
Andrew
I'll do my best to answer these for you. It sounds like you're used to building plugins for something like Wordpress - Shopify apps are a bit different.
You can't access anything on the thank you page for the order.
The thank-you page and checkout process go through a secured Shopify page that you don't have access to. So if you want information about what your URL shortener attached to the store pages, you'll need to capture it while the customer is still on your pages (using something like a ScriptTag plus JavaScript to read the query string), or hope that it's inside the Order when you retrieve it later (via the API or a webhook).
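As a rough sketch of the ScriptTag approach, the injected script below reads the query string and stores it as a cart attribute so it comes back later in the order's note_attributes (the attribute name is an assumption):

```js
// storefront snippet injected via the ScriptTag API (sketch)
(function () {
  // pull visit_id out of the query string, e.g. ?visit_id=abc123
  var match = window.location.search.match(/[?&]visit_id=([^&]+)/);
  if (!match) return;
  var visitId = decodeURIComponent(match[1]);

  // persist it as a cart attribute via Shopify's AJAX cart API;
  // cart attributes surface on the order as note_attributes
  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/cart/update.js');
  xhr.setRequestHeader('Content-Type', 'application/json');
  xhr.send(JSON.stringify({ attributes: { visit_id: visitId } }));
})();
```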
Webhooks need to talk to a server you run.
They send the information to you, and then you process it and deal with it. If you want to use webhooks, you will need to run a server with your app on it for the webhooks to talk to.
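For example, a minimal receiver might look like the sketch below (Node + Express; the route path and secret handling are assumptions). Shopify signs each webhook with your app's shared secret, which you should verify before trusting the payload:

```js
// minimal webhook receiver (sketch, Node + Express)
const crypto = require('crypto');
const express = require('express');
const app = express();

const SHARED_SECRET = process.env.SHOPIFY_SHARED_SECRET; // your app's secret

app.post('/webhooks/orders-create', express.raw({ type: 'application/json' }), (req, res) => {
  // verify the HMAC header against the raw request body
  const digest = crypto.createHmac('sha256', SHARED_SECRET)
    .update(req.body)
    .digest('base64');
  if (digest !== req.get('X-Shopify-Hmac-Sha256')) {
    return res.sendStatus(401); // not from Shopify
  }

  const order = JSON.parse(req.body);
  // process the order here, e.g. read note_attributes for visit_id
  console.log(order.id);
  res.sendStatus(200); // acknowledge quickly, or Shopify retries
});

app.listen(3000);
```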
You manage your own config.
Because you're running your own server to handle those webhooks, you handle configuration for your plugin there. The apps I've worked on typically have their own database for managing configuration options, as well as an admin panel to manage them (it's what the user accesses when they click 'Log Into [Your App]' on the "Manage Apps" screen).
You'll need to run your own server to host your Shopify app.

SEO optimization for content generated by Javascript?

I have created widgets for my website (xyz.com) which can be embedded in other websites. Let's say I embed a widget, a photo album, in another website, abc.com. The content resides on xyz.com but is pulled into abc.com via JavaScript.
Will the content generated by the widgets (JavaScript) on abc.com be indexed by search engines?
Google will not index anything that is not visible when a page is loaded with JavaScript disabled.
There is more information in this similar question:
google indexing text retrieved by ajax or javascript after page load
Also, you can test what Googlebot 'sees' by using the "Fetch as Googlebot" feature of Google Webmaster Tools.
If you want Google to index your Ajax, you can read Google's recommendations here:
http://code.google.com/web/ajaxcrawling/docs/getting-started.html
If you follow Google's scheme for Making AJAX Applications Crawlable, then Google will index content that's generated with JavaScript. So will Bing and Yandex.
Implementing this scheme is somewhat involved, which is why there are companies that provide it as a service that plugs in at the web-server level. (I work for one of these: https://ajaxsnapshots.com)
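At its core, the scheme maps hashbang URLs like example.com/#!/photos to a crawlable form, example.com/?_escaped_fragment_=/photos, and your server returns a pre-rendered HTML snapshot for that form. A rough sketch (Express; the snapshot renderer is a placeholder):

```js
// serving HTML snapshots for the _escaped_fragment_ scheme (sketch)
const express = require('express');
const app = express();

app.get('/', (req, res) => {
  const fragment = req.query._escaped_fragment_;
  if (fragment !== undefined) {
    // Googlebot rewrites example.com/#!/photos to
    // example.com/?_escaped_fragment_=/photos, so serve a snapshot
    return res.send(renderSnapshot(fragment));
  }
  // normal visitors get the JavaScript-driven page
  res.sendFile(__dirname + '/index.html');
});

// placeholder: in practice this would be pre-rendered HTML,
// e.g. captured with a headless browser
function renderSnapshot(fragment) {
  return '<html><body>Snapshot for ' + fragment + '</body></html>';
}

app.listen(3000);
```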

Is this a blackhat SEO technique?

I have a site which has been developed completely in Flash. The site owners do not want to move to a more text/HTML-based site, so I am planning to create an alternative HTML/text-based site that Googlebot will get redirected to (by checking the user agent). My question is: is this officially allowed by Google?
If not, then how come many subscription-based sites display a different set of data to Google than to their users? Is that allowed?
Thank you very much.
I've dealt with this exact scenario for a large ecommerce site, and Google essentially ignored the site. Google considers it cloaking and addresses it directly, saying:
Cloaking refers to the practice of presenting different content or URLs to users and search engines. Serving up different results based on user agent may cause your site to be perceived as deceptive and removed from the Google index.
Instead, create an ADA-compliant version of the website so that users with screen readers and vision aids can use your web site. As long as there is a link from your home page to your ADA-compliant pages, Google will index them.
The official advice seems to be: offer a visible link to a non-Flash version of the site. Fooling Googlebot is a surefire way to get into trouble. And remember, Google results will link to the matching page, so do not create useless results.
Google already indexes Flash content, so my suggestion would be to check how your site is being indexed. Maybe you don't have to do anything.
I don't think showing an alternate version of the site is good from a Google perspective.
If you serve up your page at the exact same address, then you're probably fine. For example, if you show 'http://www.somesite.com/' but direct Googlebot to 'http://www.somesite.com/alt.htm', then Google might direct search users to alt.htm. You don't want that, right?
This is called cloaking. I'm not sure what the exact effects are, but it is certainly not whitehat. I'm pretty sure Google is working on a way to crawl Flash now, so it might not even be a concern.
I'm assuming you're not really doing a redirect but instead a PHP include or something similar, so it shows up as the same page. If you're actually redirecting, then Google is just going to index the other page as normal.
Some sites do offer a different level of content, but they limit the content rather than offering alternative or additional content. This is generally done so unrelated things don't get indexed.

How to find inbound links to a given URL on the fly?

Technorati's got their Cosmos API, which works fairly well but limits you to noncommercial use and no more than 500 queries a day.
Yahoo's got a Site Explorer InLink Data API, but it defines the task very literally, returning links from sidebar widgets in blogs rather than just links from inside blog content.
Is there any other alternative for tracking who's linking to a given URL (think of the discussion links that run below stories on Techmeme.com)? Or will I have to roll my own?
Well, it's not an API, but if you google (for example): "link:nytimes.com", the search results that come back show inbound links to that site.
I haven't tried to implement what you want yet, but the Google search API almost certainly has that functionality built in.
Is this for links to URLs under your control?
If so, you could whip up something quick that logs entries from the Referer HTTP header.
If you wanted to do this for an entire web site without altering application code, you could implement it as an ISAPI filter or equivalent for your web server of choice.
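A quick sketch of the logging idea, using Express middleware rather than an ISAPI filter (the log format and file name are arbitrary):

```js
// log external referrers to your pages (sketch, Express middleware)
const express = require('express');
const fs = require('fs');
const app = express();

app.use((req, res, next) => {
  const referer = req.get('Referer'); // note: the header is spelled "Referer"
  if (referer && referer.indexOf(req.hostname) === -1) {
    // append external referrers for later aggregation
    fs.appendFile('referrers.log', req.path + '\t' + referer + '\n', () => {});
  }
  next();
});

app.listen(3000);
```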
Information available publicly from web crawlers is always going to be incomplete and unreliable (not that my solution isn't...).