GET vs POST in SEO [closed]

My web application returns a page for every request generated by a form submission. The form submits to the same URL as the page itself.
Each time, the page loads with a different title tag. Does that indicate different pages with the same URL?
How does this affect SEO, and how can I manage the situation?
Edit
This question is not purely SEO-related, nor does it require SEO-specific reasoning or answers; it can also be explained technically in terms of how search engine robots work. If it still seems off-topic, I ask the moderators to explain why.

Try using a rewrite rule to format your URLs so that each request maps to a unique page. If you are always loading the same page, Google (or other search engines) will only index that single page.
http://www.seomoz.org/img/upload/anatomy-of-a-url.jpg
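A minimal sketch of that idea, assuming an Apache server with mod_rewrite; the path and parameter names below are placeholders, not something from the original answer:

    # Hypothetical .htaccess sketch: expose each form result under its own
    # crawlable path and rewrite it internally to the single page that
    # actually handles the request, passing the value as a GET parameter.
    RewriteEngine On
    RewriteRule ^results/([^/]+)/?$ /form-handler?query=$1 [L,QSA]

Each distinct /results/... URL can then carry its own title tag, so search engines see separate pages rather than one URL with changing content.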

In addition to loading the page with a different title tag each time, you need to make each URL unique by appending something distinctive to it, such as your GET variable data.
To get crawled by spiders, don't forget to submit a sitemap with the relevant URLs to the search engines.
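One common way to point crawlers at that sitemap is a Sitemap line in robots.txt (the URL below is just a placeholder):

    Sitemap: http://www.example.com/sitemap.xml

Most search engines also accept sitemap submissions through their webmaster consoles.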

Related

Is it content multiplication? [closed]

I have two URLs with exactly the same content, because I'd like a subpage to be reachable easily (via a short name) and to have a valuable, descriptive URL as well:
domain.com/name_of_company
domain.com/name_of_company/address_of_company
And I have a third URL:
domain.com/name_of_company/products_of_company
This page would be the same as domain.com/name_of_company but with more content at the top of the page.
Do you think this is a good approach, or should I forget it because I'll be punished by Google?
Thank you in advance!
It's perfectly fine to have identical pages, but then you should add a canonical link to them, which tells search engines which URL you consider to be the original. That way search engines can immediately see that the pages were intended to be identical.
This not only avoids being punished for duplicate content, it also lets search engines count incoming links to both URLs as links to the original page.
For the third page, you could consider if you need to repeat all the content from the other page. Having some content repeated from page to page is common (e.g. page footer), but too much repeated content is not valuable. Even if you are not directly punished for the repetition, there is a risk that some of the repeated content is simply ignored.
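A minimal sketch of what that canonical link could look like, assuming domain.com/name_of_company is the URL you pick as the original; it would go in the <head> of both duplicate pages:

    <!-- on domain.com/name_of_company and domain.com/name_of_company/address_of_company -->
    <link rel="canonical" href="http://domain.com/name_of_company" />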

Dynamic Text replacement for PPC Campaigns [closed]

Stumbled across this: http://get.unbounce.com/dynamic-text-replacement/
It seems to be a WYSIWYG landing page creator. So I was just wondering if it's possible to do dynamic text replacement normally. I pretty much just need one landing page whose text (keywords) changes depending on what the user searched for in the search engine. And if possible, I'd also like images to change depending on what the user searched for.
How can this be accomplished?
AdWords lets you use what is called "ValueTrack" in the click-through URLs for your ads. So you could set your click-through URL in AdWords to "http://www.example.com?keyword={keyword}"; when someone clicks on your ad, AdWords replaces {keyword} with the actual keyword from the search (or, for display ads, the best-matching keyword).
You could then have some code on your site (client- or server-side; JavaScript works fine) that looks for the keyword query string parameter in the URL, extracts its value, and places it somewhere on your page.
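A minimal client-side sketch of that idea; the parameter name keyword matches the example URL above, while the element id is only an assumption for illustration:

    // Read the "keyword" query parameter and drop it into the page.
    // The element id "dynamic-headline" is a placeholder, not part of the answer.
    var params = new URLSearchParams(window.location.search);
    var keyword = params.get('keyword');
    if (keyword) {
      document.getElementById('dynamic-headline').textContent = keyword;
      // Images could be swapped the same way, e.g. by mapping keywords to image URLs.
    }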
Hope that helps

How SEF do the URLs have to be? [closed]

I'm really attracted to the webservice.js module. I'd like to use it as a real web server, with only an HTML page whose JavaScript calls the web service to retrieve the data.
The problem I see concerns search engines, since I want my website to be search engine optimized.
So I thought I could fall back to plain HTML when JS is not enabled (when you just go to the URL, webservice.js sends back some data as plain HTML). For this, the links will be displayed in the HTML markup on the front page.
The question is: how search-engine friendly do the URLs have to be?
I mean, the web service will allow me to have URLs of this kind: http://domain.com/content?get=title-uri-encoded.
Is that search-engine friendly? I know having http://domain.com/content/title-uri-encoded would be better, but is the kind I'm thinking of still friendly?
PS: I'm not sure whether this post belongs to SO or Programmers.se...
You probably want to look into progressive enhancement techniques or Google's proposed AJAX solution.
You may end up with a URL structure like this:
AJAX-enabled public version
http://domain.com/content#!get=title-uri-encoded
Search engine version (plain HTML)
http://domain.com/content?_escaped_fragment_=get=title-uri-encoded
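A rough server-side sketch of how the escaped-fragment variant could be handled, using Node's built-in http module rather than webservice.js itself (whose API I'm not assuming anything about); paths and markup are placeholders:

    var http = require('http');
    var url = require('url');

    http.createServer(function (req, res) {
      var query = url.parse(req.url, true).query;
      if (query._escaped_fragment_ !== undefined) {
        // Crawler variant: render the requested content as plain HTML.
        res.writeHead(200, { 'Content-Type': 'text/html' });
        res.end('<html><body><h1>Content for ' + query._escaped_fragment_ + '</h1></body></html>');
      } else {
        // Normal visitors get the JS-driven page that reads the #! fragment.
        res.writeHead(200, { 'Content-Type': 'text/html' });
        res.end('<html><body><script src="/app.js"></script></body></html>');
      }
    }).listen(8080);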

From an SEO perspective, is dynamic content good or not? [closed]

I have 5 dynamic articles on my home page, chosen at random.
How quickly will Google read my content?
First, tell me whether Google will really crawl my content, because I change my content on every page refresh.
That is my doubt:
Will Google crawl random content or not?
Thanks
Google will crawl anything. But if your content is random you'll soon get a Google ban/discount. Regularly changing content is good, random content is not.
Also, your content is only a small part of your search ranking these days. Getting relevant links (links from websites with domain authority) has much more influence.

How should Google crawl my blog? [closed]

I was wondering how (or if) I should guide Googlebot through my blog. Should I only allow it to visit pages with single entries, or should it also crawl the main page (which also has full entries)? My concern is that the main page changes whenever I add a new post, and Google keeps the old version for some time. I also find directing people to the main page annoying - you have to look through all the posts before you find the one you're interested in. So what is the proper way to solve this issue?
Why not submit a sitemap with the appropriate <changefreq> tags? If you set that to "always" for the homepage, the crawler will know that your homepage is very volatile (and you can give accurate change frequencies for other URLs too, of course). You can also give a lower priority to your homepage and a higher one to the pages you prefer to see higher in the index.
I do not recommend telling crawlers to avoid indexing your homepage completely, as that would throw away any link juice you might be getting from links to it from other sites -- tweaking change freq and priority seems preferable.
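A minimal sitemap sketch along those lines; the URLs and values are placeholders, not the author's:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>http://example.com/</loc>
        <changefreq>always</changefreq>
        <priority>0.5</priority>
      </url>
      <url>
        <loc>http://example.com/posts/my-first-post</loc>
        <changefreq>monthly</changefreq>
        <priority>0.8</priority>
      </url>
    </urlset>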
Make a sitemap.xml and regenerate it periodically. Check out Google Webmaster Tools.