Proper way to forward domain from Server A to Server B - SEO

Here's my situation.
I registered myweb.ca (a country-specific domain) with webhost provider A because they allow ccTLDs, while webhost provider B does not. I host my PHP files on provider B at http://mysecretweb.com/myweb/ because I like them better (reliable, cheaper, proven, etc.).
I want to achieve the following:
When a user types http://myweb.ca/aboutus.html, they will see the contents of http://mysecretweb.com/myweb/aboutus.html.
When a user visits aboutus.html, the browser's address bar must display http://myweb.ca/aboutus.html, NOT http://mysecretweb.com/myweb/aboutus.html.
The public and search engines CANNOT become aware of the domain http://mysecretweb.com/myweb, because it is a secret.
Any solution offered must not negatively impact SEO.
Will domain forwarding with masking solve my problem? Any suggestions?
Additional Detail
Someone suggested I change the nameserver information from ns1.providerA.com to ns1.providerB.com. Someone else counter-argued that provider B will prohibit this because the provider A domain is not on their network, and that provider B may ban my account for doing this. I am confused...

You could write one PHP script that takes a URL (or path) from $_GET, downloads the corresponding page from the hidden host, and passes it through to the user (including headers) - then use some .htaccess rewrite magic to point everything at that script. This is about the only way that is entirely transparent to both humans and bots.
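A minimal sketch of that approach, assuming the host that myweb.ca resolves to runs Apache with mod_rewrite plus PHP with cURL (proxy.php is an illustrative name; the base URL comes from the question). First the .htaccess:

    RewriteEngine On
    # Send everything except the proxy itself to proxy.php, carrying
    # the original path in the query string.
    RewriteCond %{REQUEST_URI} !^/proxy\.php
    RewriteRule ^(.*)$ /proxy.php?path=$1 [L,QSA]

And the proxy script itself:

    <?php
    // proxy.php - fetch the real page from the hidden host and relay
    // it, so the visitor only ever sees myweb.ca in the address bar.
    $path = isset($_GET['path']) ? $_GET['path'] : '';
    if (strpos($path, '..') !== false) {   // block path traversal
        http_response_code(400);
        exit;
    }
    $ch = curl_init('http://mysecretweb.com/myweb/' . $path);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $body = curl_exec($ch);
    $type = curl_getinfo($ch, CURLINFO_CONTENT_TYPE);
    $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);
    http_response_code($code ? $code : 502);
    if ($type) {
        header('Content-Type: ' . $type);
    }
    echo $body;

The hidden URL never reaches the browser; the trade-off is an extra HTTP round trip between the two hosts on every page view.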

You could try to tell bots and humans apart and take different actions for the two.
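If you went down that road, the usual (crude) test is the User-Agent header; note that serving bots and humans different content is what search engines call cloaking, which can hurt rankings rather than help - relevant given the SEO requirement above. A rough sketch:

    <?php
    // Crude bot detection via User-Agent: trivially spoofable, and
    // serving bots different content risks cloaking penalties.
    $ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
    if (preg_match('/bot|crawl|spider|slurp/i', $ua)) {
        echo 'crawler-facing response';
    } else {
        echo 'human-facing response';
    }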

Related

Pass data such as username in hostname

I have seen some sites use hostnames as data such as usernames (for example username.example.com) and was wondering how you would be able to achieve this.
Is it good practice to use hostnames like this or are there reasons against it?
Thanks in advance.
It is generally bad practice to treat hostnames this way. Lookups become a bit more complicated and it is always safest to use usernames in the path or query.
Hostnames are designed to be thought of in a global sense; usernames belong in the path, for instance: user.example.com/username/profile.
It also helps protect the user (a little), because a path travels inside the HTTP request itself, whereas a subdomain request essentially asks DNS for user.example.com, and that request can pass through multiple resolvers before returning to the client - and DNS monitoring is the number one way that people do tracking.
DNS tracking is easy because it is already fast and open, and its contents aren't designed to be hidden the way HTTPS or more recent IPsec techniques hide theirs.
I've accomplished this by setting up a DNS wildcard with the DNS host (*.example.com) and then using PHP to parse the username out of the hostname and act accordingly.
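A minimal sketch of that parsing step, assuming a wildcard DNS record already points *.example.com at this server (example.com is a placeholder):

    <?php
    // With wildcard DNS, every subdomain hits the same vhost;
    // take the left-most label of the Host header as the username.
    $host = isset($_SERVER['HTTP_HOST']) ? strtolower($_SERVER['HTTP_HOST']) : '';
    $host = preg_replace('/:\d+$/', '', $host);   // drop any port
    if (preg_match('/^([a-z0-9-]+)\.example\.com$/', $host, $m) && $m[1] !== 'www') {
        // ...look up and render the profile for this user...
        echo 'Profile for ' . htmlspecialchars($m[1]);
    } else {
        header('Location: http://example.com/');
        exit;
    }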

How RESTful is using subdomains as resource identifiers?

We have a single-page app (AngularJs) which interacts with the backend using REST API. The app allows each user to see information about the company the user works at, but not any other company's data. Our current REST API looks like this:
domain.com/companies/123
domain.com/companies/123/employees
domain.com/employees/987
NOTE: All ids are GUIDs, hence the last endpoint doesn't include a company id, just the employee id.
We recently started enforcing the requirement that each user has access only to information related to the company where the user works. This means that on the backend we need to track who the logged-in user is (which is a simple auth problem) as well as determine which company's information is being accessed. The latter is not easy to determine from our REST API calls, because some of them do not include a company id, such as the last one shown above.
We decided that instead of tracking company ID in the UI and sending it with each request, we would put it in the subdomain. So, assuming that ACME company has id=123 our API would change as follows:
acme.domain.com
acme.domain.com/employees
acme.domain.com/employees/987
This makes identifying the company very easy on the backend and requires minor changes to REST calls from our single-page app. However, my concern is that it breaks the RESTfulness of our API. This may also introduce some CORS problems, but I don't have a use case for it now.
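For illustration, resolving the company on the backend would then look something like this (PHP purely as an example, since our backend language isn't stated; find_company_id_by_slug is a hypothetical stub):

    <?php
    // Hypothetical stub: map a subdomain slug to a company id.
    function find_company_id_by_slug($slug) {
        return $slug === 'acme' ? 123 : null;
    }
    // Resolve the tenant from the subdomain, e.g. acme.domain.com -> "acme".
    $host = isset($_SERVER['HTTP_HOST']) ? strtolower($_SERVER['HTTP_HOST']) : '';
    if (!preg_match('/^([a-z0-9-]+)\.domain\.com$/', $host, $m)) {
        http_response_code(404);
        exit;
    }
    $companyId = find_company_id_by_slug($m[1]);
    // Every query below is then scoped to $companyId, so even
    // GET /employees/987 can be authorized against the right company.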
I would like to hear your thoughts on this and how you dealt with this problem in the past.
Thanks!
In a similar application, we put the 'company id' into the path (every company-specific path), not into a subdomain.
I wouldn't care a jot about whether some terminology enthusiast thought my design was "RESTful" or not, but I can see several disadvantages to using domains, mostly stemming from the fact that the world tends to assume that the domain identifies "the server", and the path is how you find an item on that server. There is a certain amount of extra stuff you'll have to deal with when using multiple domains which you wouldn't with paths:
HTTPS - you'd need a wildcard certificate instead of a simple one
DNS - you're either going to have wildcard DNS entries, or your application management is now going to involve DNS management
All the CORS stuff which you mention - may or may not be a headache in your specific application - anything which is making 'same domain' assumptions about security policy is going to be affected.
Of course, if you want lots of isolation between companies, and effectively you would be as happy running a separate server for each company, then it's not a bad design. I can't see it's more or less RESTful, as that's just a matter of viewpoint.
There is nothing "unrestful" in using subdomains. URIs in REST are opaque, meaning that you don't really care about what the URI is, but only about the fact that every single resource in the system can be identified and referenced independently.
Also, in a RESTful application, you never compose URLs manually; you traverse the hypermedia links you find at the API endpoint and in all the returned responses. Since you don't need to manually compose URIs, from the REST point of view it makes no difference how they look. Having a URI such as
//domain.com/ABGHTYT12345H
would be as RESTful as
//domain.com/companies/acme/employees/123
or
//domain.com/acme/employees/smith-charles
or
//acme.domain.com/employees/123
All of those are equally RESTful.
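To make the hypermedia point concrete: a client that starts at the API root and follows links never composes any of these URIs by hand. A hypothetical root response (the link-relation names are invented for the example):

    {
      "company": "ACME",
      "_links": {
        "self":      { "href": "https://acme.domain.com/" },
        "employees": { "href": "https://acme.domain.com/employees" }
      }
    }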
But... I like to think of usable APIs, and when it comes to usability, having readable, meaningful URLs is a must for me. Following conventions is also a good idea. In your particular case there is nothing unrestful about the route, but it is unusual to find that kind of behaviour in an API, so it might not be best practice. Also, as someone pointed out, it might complicate your development (not specifically the CORS part, though; that one is easily solved by sending a few HTTP headers).
So, even if I can't see anything non-REST in your proposal, convention elsewhere argues against subdomains in an API.

Why is CORS based on the target server? Why do I have to use JSONP?

I would like a concrete example in an answer if possible.
For explanations sake we have three players here.
My Server (myserver.com)
Client Server (myclient.com)
Client User (accessing data through myclient.com)
I'm making a web service available to my clients that allows them to retrieve their data in JSON format. In order for their websites to work, they have to use the standard cross-origin workarounds - either making the request server-side or relying on me to set
Access-Control-Allow-Origin: http://myclient.com
So, a two-part question here. First, why do I set the origin policy at myserver.com? Why does my server care whom it serves content to? Shouldn't it be myclient.com that sets this? A concrete example here would be great.
Part two, I understand that JSONP works around this, but I'm worried about using it because I don't understand the security implications from part one. What is the point of JSONP if I can just set Access-Control-Allow-Origin: *?
Lots of questions!
JSONP is definitely dangerous if you intend to serve user-specific content. If the content the server is serving is completely public, and (probably) read-only, JSONP is a wise choice. Don't use it for anything that assumes a 'logged in state' or authentication/authorization.
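For reference, JSONP is just the payload wrapped in a caller-chosen function name and served as a script - which is exactly why it must stay public: any page on the web can include that script tag, and the user's cookies ride along. A minimal endpoint sketch in PHP (the quote data and callback name are made up):

    <?php
    // JSONP endpoint: public, read-only data only.
    $data = array('stock' => 'ACME', 'price' => 12.34);
    $cb = isset($_GET['callback']) ? $_GET['callback'] : 'callback';
    // Whitelist the callback name so the response can't be abused
    // for script injection.
    if (!preg_match('/^[A-Za-z_][A-Za-z0-9_]*$/', $cb)) {
        http_response_code(400);
        exit;
    }
    header('Content-Type: application/javascript');
    echo $cb . '(' . json_encode($data) . ');';

The consumer then writes <script src="http://myserver.com/quote.php?callback=showQuote"></script> and defines a global showQuote function.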
CORS is definitely much better than JSONP, but it's not supported in every (older) browser. If you want to support as many browsers as possible, you will need some kind of fallback. CORS also allows you to do requests other than GET, which greatly improves flexibility.
The reason the target server needs to allow this is mainly that JavaScript running on domain A should not be able to access domain B. If domain A could 'allow' this itself, it would imply you could create JavaScript applications with access to the sandbox of any public server. Only the owner of domain B can explicitly allow the owner of domain A to access their content.
Your argument (why does domain B care who accesses its resources?) would normally be valid, but this is not to protect domain B; it is to protect the end-user. Domain A should not be allowed to perform requests on behalf of the end-user to domain B without explicit permission.
And just to be sure: unless you understand the security implications of JSONP quite well, CORS is likely a much safer choice.
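A sketch of what that explicit opt-in looks like on myserver.com, assuming PHP; the allowed origin is the client site from the question:

    <?php
    // Opt a specific origin in - never '*' for authenticated data.
    $origin = isset($_SERVER['HTTP_ORIGIN']) ? $_SERVER['HTTP_ORIGIN'] : '';
    if ($origin === 'http://myclient.com') {
        header('Access-Control-Allow-Origin: ' . $origin);
        header('Vary: Origin');
    }
    // Answer the preflight browsers send before non-simple requests.
    if ($_SERVER['REQUEST_METHOD'] === 'OPTIONS') {
        header('Access-Control-Allow-Methods: GET, POST, PUT, DELETE');
        header('Access-Control-Allow-Headers: Content-Type, Authorization');
        exit;
    }
    header('Content-Type: application/json');
    echo json_encode(array('data' => 'client-specific payload'));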

Restrict unauthenticated access to files with mod_rewrite and scripting language

I have scavenged for answers online, but none seem similar to what I am trying to achieve. As such, I hope the gurus at Stack Overflow can help me out.
What is it that I am trying to accomplish?
I want to restrict access to content for non-authorized users. Content that is accessible to non-authorized users will be specified in a white list; all other content is blacklisted.
What is my environment?
I am running Apache in conjunction with a scripting language very similar to PHP. The language will not be known by many, but it is Fazzt (in case you do know it and can infer its differences from PHP: there are no pointers / memory management, no decimal values, and no binary data). I have to use this environment due to the nature of the project.
What is happening on the site?
The site authenticates users and stores authentication in sessions. An unauthenticated user is presented with a styled webpage (it contains images, CSS, JS, etc.). Hence, I need to white-list all of the static image, CSS, and JS files so that they are available for download by the client browser. Once signed in, a broader range of dynamic content is presented (as such, anything that is not white-listed is automatically black-listed).
How did I plan to solve the problem?
This is silly, but I guess the obvious is not always seen. My approach involved mod_rewriting all requests for existing files that do not match .fzt and .fsp pages. The rewrite would go to a script that would check the requested file against the white list. If the file is present in the list, the request would get routed directly to the file (yes, silly me... it would get mod_rewritten again >_<). If it's not in the list, the user's authentication would be checked. If the user is not authenticated, an HTTP "File not found" would be returned. Otherwise, the request would be redirected to the file and served (same folly).
As you can see, the approach is greatly flawed. However, I am sure something of this nature should be possible... yet I have not found any proof just yet. What do you think? Is the mod_rewrite / script combination a completely wrong way of performing this task? How would you do it otherwise? Note that I cannot simply slap on .htaccess auth, as access is determined by user authentication that is tracked by Fazzt (read above: a scripting language similar to PHP).
Any suggestions or thoughts would be greatly appreciated!
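One way to break the loop described above: have the gateway serve the file's bytes itself instead of redirecting back through Apache, so mod_rewrite never sees a second request. A rough sketch, with the gateway shown in PHP since Fazzt is described as similar (gate.fzt and the whitelist entries are placeholder names):

    RewriteEngine On
    # Hand every existing non-.fzt/.fsp file to the gateway script.
    RewriteCond %{REQUEST_FILENAME} -f
    RewriteCond %{REQUEST_URI} !\.(fzt|fsp)$
    RewriteRule ^(.*)$ /gate.fzt?file=$1 [L,QSA]

    <?php
    // PHP stand-in for the Fazzt gateway: decide, then stream the
    // file directly instead of redirecting (no second rewrite).
    session_start();
    $whitelist = array('css/site.css', 'js/site.js', 'img/logo.png');
    $file = isset($_GET['file']) ? $_GET['file'] : '';
    $allowed = in_array($file, $whitelist, true)
        || !empty($_SESSION['authenticated']);
    if (!$allowed || strpos($file, '..') !== false || !is_file($file)) {
        http_response_code(404);   // blacklisted looks like "not found"
        exit;
    }
    header('Content-Type: ' . mime_content_type($file));
    readfile($file);   // emit the bytes ourselves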

Domain forwarding and SEO

I want http://mynewdomain.com to forward with masking to http://mysecretdomain.com/mynewsite. When a user types in http://mynewdomain.com/aboutus.html, he should see the contents of http://mysecretdomain.com/mynewsite/aboutus.html.
I do not want the public to be aware of http://mysecretdomain.com.
Will the way I use forwarding and masking negatively affect SEO?
By using domain forwarding and masking, is there any danger of people becoming aware of mysecretdomain.com? (i.e. will users discover the relationship between mynewdomain.com and mysecretdomain.com?)
Additional details
It is extremely important that no one discovers the http://mysecretdomain.com/mynewsite domain and directory, despite the fact that it is hosting all the content. Do I have to do anything to ensure this?
Why not just map your secret domain to the ~/www directory on your host, and the new domain to ~/www/newdomain? Then when you go to mysecretdomain.com/newdomain/ it looks in ~/www/newdomain/... exactly what you described, with no redirects.
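In Apache terms, that mapping is just two virtual hosts sharing a directory tree; a sketch, using the paths from this answer and the domain names from the question (the exact DocumentRoot depends on your host):

    <VirtualHost *:80>
        ServerName mysecretdomain.com
        DocumentRoot /home/user/www
    </VirtualHost>

    <VirtualHost *:80>
        ServerName mynewdomain.com
        DocumentRoot /home/user/www/newdomain
    </VirtualHost>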
Maybe I don't understand your goal here.