I want to create a web site builder, with one server acting as the main web server.
My concept is as follows:
1 - the user enters a URL (http://www.userdomain.com)
2 - it masks and redirects to one of my custom domains (http://www.myapp.userdomain.com)
3 - from the custom domain (myapp.userdomain), my application identifies the web site
4 - according to the web site, it renders the pages
My concerns are:
1 - is this the proper way of doing something like this (an online web site builder)?
2 - since I'm masking the URL, I won't be able to do something like 'http://www.myapp.userdomain.com/products', and if the user refreshes the page it goes back to the home page (http://www.myapp.userdomain.com). How do I avoid that?
3 - I'm thinking of using Rails and Liquid for this. Would that be a good option?
Thanks in advance,
sameera
Masking domains with redirects is going to get messy, plus all those redirects may not play nicely with SEO. Rails doesn't care whether you host everything under a common domain name; it's just as easy to detect the requested domain name as it is the requested subdomain.
I suggest pointing all of your end-user domains directly at the IP of your main server so that redirects are not required. Use the :domain and :subdomain conditions in the Rails router, or parse the hostname in your application controller, to determine which site to render based on the host the user requested. This gives you added flexibility later, as you could tell Apache or Nginx which domains to listen for and set up different instances of your application to support rolling upgrades and the like.
Sounds like this was #wukerplank's approach, and I agree. A custom router that looks at the domain name of the current request keeps the rest of your application simple.
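For illustration, here is a minimal sketch of rendering by hostname in Rails; SitesController, Site, Page and their columns are hypothetical names, not something from the posts above:

```ruby
# config/routes.rb -- send every request to one controller; the controller
# decides which tenant site to render from the Host header.
Rails.application.routes.draw do
  root to: "sites#show"
  get "*path", to: "sites#show"
end

# app/controllers/sites_controller.rb
# Site, Page and their columns (host, path, body) are made-up names used
# only to illustrate the lookup-by-hostname idea.
class SitesController < ApplicationController
  def show
    site = Site.find_by!(host: request.host)
    page = site.pages.find_by(path: params[:path].presence || "home")
    render plain: page.body # in practice you would render a Liquid template here
  end
end
```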
You may get some more help by studying existing online site builders: have a look at Wix, Weebly, Ecositebuilder, WordPress and many others.
I have a stack system that passes page tokens in the URL. My pages are also dynamically generated content, so I have one PHP page that serves the content based on parameters.
index.php?grade=7&page=astronomy&pageno=2&token=foo1
I understand the search-indexing goal to be: have only one link per unique set of data on your website.
Bing has a way to specify particular parameters to ignore.
Google, it seems, uses rel="canonical", but is it possible to use this to tell Google to ignore the token parameter? My URLs (without tokens) can be anything like:
index.php?grade=5&page=astronomy&pageno=2
index.php?grade=6&page=math&pageno=1
index.php?grade=7&page=chemistry&page2=combustion&pageno=4
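For illustration, a canonical link that simply drops the token could be emitted like this (a PHP sketch; www.example.com stands in for the real domain, and it assumes the token only ever travels in the token parameter):

```php
<?php
// Emit a rel="canonical" pointing at the same URL minus the token parameter,
// so crawlers treat the tokenised and token-free URLs as one page.
// www.example.com is a placeholder domain.
$params = $_GET;
unset($params['token']);
$canonical = 'https://www.example.com/index.php?' . http_build_query($params);
?>
<link rel="canonical" href="<?php echo htmlspecialchars($canonical); ?>">
```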
If there is no solution for Google... other possible approaches:
If I provide a sitemap for each base page, I can supply base URLs, but any crawling of that page's links will create tokens on the resulting pages. Plus, I would have to constantly recreate the sitemap to cover new pages (e.g. at 25 posts per page, post 26 is on page 2).
One idea I've had is to identify bots on page load (I do this already) and disable all tokens for bots. Since (I'm presuming) bots don't use session data between pages anyway, the back buttons and editing features are useless to them. Is it feasible (or is it crazy) to write custom code for bots?
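Roughly what that could look like as a PHP sketch (is_bot() and page_link() are made-up helpers, and real crawler detection should check more than the User-Agent):

```php
<?php
// Very rough bot check based on the User-Agent header only.
// Proper crawler detection should also verify by reverse DNS, etc.
function is_bot(): bool
{
    $ua = $_SERVER['HTTP_USER_AGENT'] ?? '';
    return (bool) preg_match('/googlebot|bingbot|slurp|duckduckbot/i', $ua);
}

// Hypothetical link builder: drop the token for bots so every crawled URL
// is the clean, token-free form.
function page_link(array $params, ?string $token): string
{
    if (!is_bot() && $token !== null) {
        $params['token'] = $token;
    }
    return 'index.php?' . http_build_query($params);
}
```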
Thanks for your thoughts.
You can use Google Webmaster Tools to tell Google to ignore certain URL parameters.
This is covered on the Google Webmaster Help page.
I'm going to have a site where content remains for a period of 15 days and then gets removed.
I don't know too much about SEO, but my concern is about the SEO implications of having content indexed by the search engines and then, one day, it suddenly disappears and leaves a 404.
What is the best thing I can do to cope with content that comes and goes in the most SEO friendly way possible?
The best way is to respond with HTTP status code 410;
from the W3C:
The requested resource is no longer available at the server and no forwarding address is known. This condition is expected to be considered permanent. Clients with link editing capabilities SHOULD delete references to the Request-URI after user approval. If the server does not know, or has no facility to determine, whether or not the condition is permanent, the status code 404 (Not Found) SHOULD be used instead. This response is cacheable unless indicated otherwise.
The 410 response is primarily intended to assist the task of web maintenance by notifying the recipient that the resource is intentionally unavailable and that the server owners desire that remote links to that resource be removed. Such an event is common for limited-time, promotional services and for resources belonging to individuals no longer working at the server's site. It is not necessary to mark all permanently unavailable resources as "gone" or to keep the mark for any length of time -- that is left to the discretion of the server owner.
more about status codes here
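A minimal PHP sketch of that (content_expired() is a placeholder for however you track the removal):

```php
<?php
// Reply with 410 Gone once a piece of content has passed its lifetime.
// content_expired() is a hypothetical helper for your own storage/expiry logic.
if (content_expired($_GET['id'] ?? '')) {
    http_response_code(410); // tells crawlers the removal is permanent
    echo 'This content has expired and has been removed.';
    exit;
}
```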
To keep the traffic, one option is not to delete the old content but to archive it, so it remains accessible at its old URL but is linked from deeper points in an archive section of your site.
If you really want to delete it, it is totally OK to return a 404 or 410; spiders understand that the resource is no longer available.
Most search engines use something called a robots.txt file. You can specify which URLs and paths you want the search engine to ignore, so if all of your expiring content is under www.domain.com/content/, you can have Google ignore that whole branch of your site.
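For example, a robots.txt along these lines (the /content/ path is only illustrative):

```
User-agent: *
Disallow: /content/
```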
I have a product page on my website which I added 3 years ago.
Production of the product has now stopped, and the product page was removed from the website.
What I did was start displaying a message on the product page saying that production of the product has stopped.
When someone searches Google for that product, the product page which was removed from the site still shows up first in the search results.
The PageRank for the product page is also high.
I don't want the removed product page to be shown at the top of the search results.
What is the proper way to remove a page from a website so that the removal is reflected in whatever Google has indexed?
Thanks for any reply.
Delete It
The proper way to remove a page from a site is to delete the actual file that is returned to the user/bot when the page is requested. If the file is not on the web server, any well-configured web server will return a 404, and the bot/spider will remove it from the index on the next refresh.
Redirect It
If you want to keep the good "Google juice" or SERP ranking the page has, probably due to inbound links from external sites, you'd be best to set your web server to do a 302 redirect to a similar (updated) product.
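For example, the old product URL could hand visitors (and crawlers) on to the replacement with a one-liner like this PHP sketch; the /products/new-widget path is a made-up placeholder:

```php
<?php
// old-product.php -- 302 redirect to the updated product page.
// "/products/new-widget" is a placeholder path.
header('Location: /products/new-widget', true, 302);
exit;
```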
Keep and convert
However, if the page is doing so well that it ranks #1 for searches to the entire site, you need to use this to your advantage. Leave the bulk of the copy on the page the same, but highlight to the viewer that the product no longer exists and provide some helpful options to the user instead: tell them about a newer, better product, tell them why it's no longer available, tell them where they can go to get support if they already have the discontinued product.
I completely agree with the above suggestion and want to add just one point.
If you want to remove that page from Google's search results, just log in to Google Webmaster Tools (you must have verified the website in Google Webmaster Tools) and submit that page as a URL removal request.
Google will de-index the page and it will be removed from Google's search rankings.
I have been provided with the ResellerClub HTTP API for domain search. The URL is as follows:
https://test.httpapi.com/api/domains/available.json?auth-userid=0&auth-password=password&domain-name=domain1&domain-name=domain2&tlds=com&tlds=net
Now I don't understand how to use it, or what should be in place of test.httpapi.com.
Also, when I use my own domain name, say www.x.in, I use x.httpapi.com with the valid parameters, which makes the URL
https://x.httpapi.com/api/domains/available.json?auth-userid=xxxx&auth-password=xxxxx&domain-name=test.com&domain-name=test2.com&tlds=com&tlds=net
It shows an SSL error, and when I use
www.x.httpapi.com/api/domains/available.json?auth-userid=xxxx&auth-password=xxxxx&domain-name=test.com&domain-name=test2.com&tlds=com&tlds=net
it shows an nginx error.
Please advise.
You can find most of this information buried in the ResellerClub HTTP API documentation; however, here is what you need to do in order to get going.
There is nothing wrong with the URL. That is the testing URL for them.
https://test.httpapi.com/api/domains/available.json?auth-userid=0&auth-password=password&domain-name=domain1&domain-name=domain2&tlds=com&tlds=net
...is the URL to check the availability of domain names. You are supposed to replace the auth-userid parameter with your ResellerClub user ID (you can get this from the reseller control panel) and the auth-password parameter with your password.
If your user ID is 123456 and your password is albatross, then the URL will look like:
https://test.httpapi.com/api/domains/available.json?auth-userid=123456&auth-password=albatross&domain-name=google&tlds=com&tlds=net
This URL will output JSON on your browser screen.
This URL is not working because you need to add your IP address to the list of white-listed IP addresses to make API calls. You can find this setting inside your ResellerClub control panel. Go to Settings -> API and add your IP address. Within 30 minutes, this URL will start spitting JSON.
To use it within your code, you need to white-list your server's IP within your control panel and then make a cURL call or open a socket connection.
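A rough PHP cURL sketch of that call (the user ID, password and domain names are placeholders):

```php
<?php
// Sketch: query ResellerClub's domain-availability endpoint with cURL.
// The auth-userid / auth-password values are placeholders.
$query = http_build_query([
    'auth-userid'   => '123456',
    'auth-password' => 'albatross',
]);
// http_build_query cannot repeat a key, so append the repeated parameters by hand.
$query .= '&domain-name=example&domain-name=example2&tlds=com&tlds=net';

$ch = curl_init('https://test.httpapi.com/api/domains/available.json?' . $query);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body instead of printing it
$response = curl_exec($ch);
curl_close($ch);

var_dump(json_decode($response, true)); // decoded JSON availability data
```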
Let me know if this post helped.
Hi, is there any API to look up whether a given domain name is already registered by somebody and to get alternatives (auto-suggested available domain names)?
EDIT: I think the thing I need is called domain search, not lookup :)
I've written a whois for PHP, Perl, VB and C#, all using a trick that queries '{domain}.whois-servers.net'.
It works well for all but the obscure TLDs that require registration (and usually fees), such as Tonga's .tv or .pro domains.
PHP Whois (version 3.x but should still work)
C# Whois
COM Whois (DLL only, I lost the source)
This page shows it in action. You can do some simple string matching to check if a domain is registered or not based on the result you get back.
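Here is roughly what the trick looks like as a PHP sketch (the 'No match for' check matches the .com/.net whois output; other registries word it differently):

```php
<?php
// Ask the TLD's whois server (via the whois-servers.net alias) about a domain
// and guess its registration status with simple string matching.
function is_domain_registered(string $domain): bool
{
    $tld  = substr($domain, strrpos($domain, '.') + 1);
    $sock = fsockopen("$tld.whois-servers.net", 43, $errno, $errstr, 10);
    if ($sock === false) {
        throw new RuntimeException("whois connection failed: $errstr");
    }

    fwrite($sock, $domain . "\r\n");
    $response = stream_get_contents($sock);
    fclose($sock);

    // "No match for" is what the .com/.net registry returns for free domains;
    // other registries use different wording, so adjust per TLD.
    return stripos($response, 'No match for') === false;
}

var_dump(is_domain_registered('example.com')); // bool(true) -- example.com is taken
```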
It's called whois... and for auto-suggestion, there is the domain service at 1&1.
http://www.mashape.com/apis/Name+Toolkit#Get-domain-suggestions - Advanced domain name suggestions and domain checking API.
I think you can use http://whois.domaintools.com/ to get the information. Send a web request such as http://whois.domaintools.com/example.com and it will return the information for example.com, but you need to parse the response to filter out the required information.
http://whois.bw.org/ is very good; it does suggestions and such.
I want an API that I can call from my code or pages. The XML API on www.domaintools.com seems like the thing I need; I'm looking into it.
Thanks for your support. I've found a service by domaintools.com called whoisapi. You can query available domain names and other information by sending an XML request to their servers.