Domain-specific URL include() exemption - Apache

I want to allow a URL to be passed to PHP's include(), but only for certain domains, not every one. The other domains are owned by me and sit on a separate server. If someone tries to include() any domain but these, I want it disallowed.
If this is not possible, is there a work around?

My recommendation? Don't do it with includes. Executing remote code that way is like swallowing a chocolate-covered cherry bomb.

You could also do it at the web server level, so that requests involving a disallowed domain never reach PHP at all. Remember that every request goes through the web server and is either served as static content or parsed, all of which takes time, before you even add the time for the PHP script to execute.

$allowed_domains = array(
    'stackoverflow.com',
    'www.stackoverflow.com',
    'facebook.com',
    'www.facebook.com',
    'google.com',
    'www.google.com',
);

if (!in_array(parse_url($url, PHP_URL_HOST), $allowed_domains)) {
    // throw an error
}
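
To tie the check to the include itself, here is a minimal sketch; include_remote() is a hypothetical name, and fetching PHP over HTTP additionally requires allow_url_include to be enabled in php.ini (it is off by default for good reason):

function include_remote($url, array $allowed_domains)
{
    // Hypothetical helper; also assumes allow_url_include=On in php.ini,
    // which is off by default and risky to enable.
    $host = parse_url($url, PHP_URL_HOST);
    if ($host === false || !in_array($host, $allowed_domains, true)) {
        trigger_error("Domain not allowed: " . $url, E_USER_ERROR);
        return false; // refuse anything outside the whitelist
    }
    return include $url;
}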

Reducing Parse Server to only Parse Cloud

I'm currently running a self-hosted, up-to-date Parse Server, but I'm facing some security issues.
At the moment, calls to the /classes route can retrieve any object in any table, and even though I might want an object to be publicly readable, I don't want to expose all of its fields. In short, I don't want the database to be directly queryable at all: I would like to disable everything except Parse Cloud code, so that I can call my own Cloud functions, but clients (Android, iOS, C#, JavaScript...) cannot retrieve data directly.
Is there any way to do this? I've searched extensively and tried debugging some controllers, but I don't have any clue.
Thank you very much in advance.
tl;dr: set the ACL on all objects so they are only readable with the master key, then tell your Cloud Code queries to use the master key when fetching your data.
Without changing Parse Server itself, you could use ACLs to only allow a specific user to access objects. You would then "log in" as that user in your Cloud Code and be able to access all objects.
Since the old method, Parse.Cloud.useMasterKey(), isn't available in the open-source Parse Server, you will have to pass the useMasterKey parameter to the query you are running, which bypasses ACLs/CLPs for that particular request. There is an example in the Parse Server wiki as well.
For convenience, here is a short code example from the Wiki:
Parse.Cloud.define('getTotalMessageCount', function(request, response) {
    var query = new Parse.Query('Messages');
    query.count({ useMasterKey: true }) // count() will use the master key to bypass ACLs
        .then(function(count) {
            response.success(count);
        });
});

Returning absolute vs. relative URIs in a REST API

Suppose DogManagementPro is an application written in a client/server architecture, where a customer who buys it is expected to run the server on his own PC and access it either locally or remotely.
Suppose I want to support a "list all dogs" operation in the DogManagementPro REST API.
A GET to http://localhost/DogManagerPro/api/dogs currently fetches the following response:
<dogs>
<dog>http://localhost/DogManagerPro/api/dogs/ralf</dog>
<dog>http://localhost/DogManagerPro/api/dogs/sparky</dog>
</dogs>
Now suppose I want to access it remotely on my local LAN [the local IP of my machine is 192.168.0.33].
What should a GET to http://192.168.0.33:1234/DogManagerPro/api/dogs fetch?
Should it be:
<dogs>
<dog>http://localhost/DogManagerPro/api/dogs/ralf</dog>
<dog>http://localhost/DogManagerPro/api/dogs/sparky</dog>
</dogs>
or perhaps:
<dogs>
<dog>http://192.168.0.33/DogManagerPro/api/dogs/ralf</dog>
<dog>http://192.168.0.33/DogManagerPro/api/dogs/sparky</dog>
</dogs>
?
Some people argue that I should sidestep the problem altogether by returning just a path element, like so:
<dogs>
<dog>/DogManagerPro/api/dogs/ralf</dog>
<dog>/DogManagerPro/api/dogs/sparky</dog>
</dogs>
What is the best way?
I've personally always used non-absolute URLs. It solves a few other problems as well, such as working behind reverse and caching proxies.
It's a bit more complicated for the client, though: if they want to store the document as-is, they may also need to store the base URL, or expand the inner URLs.
If you do choose to go the full-URL route, I would not recommend using HTTP_HOST; instead, set up multiple vhosts, have each set an environment variable, and use that.
This solves the issue if you later need proxies in front of your origin server.
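A minimal sketch of that idea, assuming each vhost sets an APP_BASE_URL environment variable (for example via Apache's SetEnv directive; the variable name is made up here):

// Fall back to a default if the vhost did not set the variable.
$base = getenv('APP_BASE_URL') ?: 'http://localhost';
$dog_uri = $base . '/DogManagerPro/api/dogs/ralf';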
I would say absolute URLs, built from the Host header that the client sent:
<dogs>
<dog>http://192.168.0.33:1234/DogManagerPro/api/dogs/ralf</dog>
<dog>http://192.168.0.33:1234/DogManagerPro/api/dogs/sparky</dog>
</dogs>
The returned URIs should be something the client is able to resolve.
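For illustration, a hedged PHP sketch of building such a URL from the request's Host header (which already carries any non-default port, e.g. "192.168.0.33:1234"):

$scheme = (!empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off') ? 'https' : 'http';
// $_SERVER['HTTP_HOST'] is the raw Host header the client sent.
$dog_uri = $scheme . '://' . $_SERVER['HTTP_HOST'] . '/DogManagerPro/api/dogs/ralf';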

Joomla: Infinite loop detected in JError after moving a website to a Bluehost server and manually importing the SQL database for a new domain

I had to move a Joomla 1.7 website to a Bluehost server. I did a lot of research on Joomla.org and Google and still haven't resolved the issue.
I receive the following error: Infinite loop detected in JError. I went through the configuration file and made sure that the database and user match my new database parameters, and I am still receiving this error. Thank you; an urgent response would be very helpful. Currently working on this, and this is not a good start. :(
I have run into the same issue moving my Joomla site from one server to another. What I learned is the following:
Make sure you look at the parameters in configuration.php. Double-check that the database variables in your configuration.php file are correct. You did mention that you have already done so; in that case I am 98% sure that you have an issue with the file attributes of configuration.php. Change the permissions of configuration.php from 444 to 666.
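If you don't have shell or FTP access handy, a throwaway PHP snippet uploaded next to configuration.php can flip the permission; this is just a sketch, and you should delete it afterwards, since a world-writable configuration file is itself a risk:

// Run once from the Joomla root, then remove this file.
$config = 'configuration.php';
if (substr(sprintf('%o', fileperms($config)), -3) !== '666') {
    chmod($config, 0666); // make the file writable so your edits stick
}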
To get detailed information about the error, open the error.php file located in /libraries/joomla/error/ on your server.
In the following code:
public static function throwError(&$exception)
{
    static $thrown = false;

    // If thrown is hit again, we've come back to JError in the middle of
    // throwing another JError, so die!
    if ($thrown) {
        // echo debug_print_backtrace();
        jexit(JText::_('JLIB_ERROR_INFINITE_LOOP'));
    }
change the line // echo debug_print_backtrace(); to the following:
print "<pre>";
echo debug_print_backtrace();
print "</pre>";
Remember: when you change the parameters in configuration.php for your new database, make sure that the configuration.php file attribute is set to 666; otherwise, when you go to save the file, the change will not stick. Try this first. Good luck.

Modern File Structuring for Website Development

So I am fairly new to website development, PHP, MySQL, etc., so it's a given that I'll get some downvotes for my sheer lack of knowledge; I just want the answer, haha.
I have probably jumped on the bandwagon or inherited a completely bad coding practice: instead of simplistic website structures such as stackoverflow.com/questions.php?q=ask (displaying content based on GET data), or something even more simplistic such as stackoverflow.com/ask.php, we have the seemingly straightforward stackoverflow.com/questions/ask.
So what's the weird magic going on?
It's likely you're looking for mod_rewrite.
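In essence, mod_rewrite funnels every request to a single PHP entry point, which then decides what to render. A minimal, hedged sketch of such a front controller (the page helpers are hypothetical):

// index.php: mod_rewrite is assumed to rewrite all requests here.
$path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
$segments = array_values(array_filter(explode('/', $path)));

if (isset($segments[0]) && $segments[0] === 'questions') {
    // e.g. /questions/ask -> $segments[1] === 'ask'
    show_questions_page(isset($segments[1]) ? $segments[1] : 'index'); // hypothetical helper
} else {
    show_home_page(); // hypothetical helper
}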
mod_rewrite will work, but it won't by itself improve the actual file structure of your site behind the scenes.
For that you would use a PHP framework. I would suggest starting with CodeIgniter, which is simpler than Zend. (I don't have experience with CakePHP, so I won't comment on that.)
You will need to configure routing to trap a URL so that it maps to a certain controller, then use a function to capture the rest of the URL as parameters; both are sketched below.
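For the routing half, a hedged example of what application/config/routes.php might contain (the exact route pattern is an assumption, not from the original answer):

$route['questions/(:any)'] = 'questions/index/$1'; // send /questions/* to the questions controller

And the catch-all function in the controller: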
function _remap($params = array())
{
    // Forward every URL segment to index() as arguments.
    return call_user_func_array(array($this, 'index'), $params);
}

Then, in the same controller, change the index function like this:

function index($id = null)
{
    $data['question'] = null; // get your data from the database
    $this->load->view('index', $data);
    return true;
}
This assumes you started with the welcome controller example in the zip file.
But to answer your question more directly: there isn't really any magic going on. The browser requests a certain resource, and the server returns that resource according to its own logic and configuration. The layout of files on the server is an internal matter; the browser only sees a representation of the server's state. Read up on the REST principle to understand this better.

How do you dynamically edit robots.txt in a load balanced environment?

Looks like we are going to have to start load balancing our web servers here soon.
We have a feature request to edit robots.txt dynamically, which is not a problem for one host. However, once we get our load balancer up and going, it sounds like I will have to scp the file over to the other host(s).
This sounds extremely 'bad'. How would you handle this situation?
I already let the client edit the meta tag 'robots', which (IMO) should effectively do the same thing as editing robots.txt, but I really don't know that much about SEO.
Maybe there is a completely different way of handling this?
UPDATE
Looks like we will store it in S3 for now and memcache it front-side...
HOW WE ARE DOING IT NOW
So we are using merb. I mapped a route to our robots.txt like so:
match('/robots.txt').to(:controller => 'welcome', :action => 'robots')
Then the relevant code looks like this:
def robots
  @cache = MMCACHE.clone
  begin
    # Try the memcache copy first.
    robot = @cache.get("/robots/robots.txt")
  rescue
    # On a cache miss, fall back to S3 and repopulate the cache.
    robot = S3.get('robots', "robots.txt")
    @cache.set("/robots/robots.txt", robot, 0)
  end
  @cache.quit
  return robot
end
I might have the app edit the contents of robots.txt and save the user input to a database. Then, at certain intervals, a background process pulls the latest from the DB and pushes it to your servers.
An alternative would be to have the reverse proxy that is doing your load balancing treat robots.txt differently: serve it directly from the reverse proxy, or have all requests for that file go to a single server. This makes a lot of sense, since robots.txt is requested relatively infrequently.
I'm not sure if you're already settled on this; if so, ignore. (UPDATE: I see the note on your original post, but this may be useful regardless.)
If you map a call to robots.txt to an HTTP handler or similar, you can generate the response from, say, a database.
Serve it via whatever dynamic content generation you are using. It's just a file, nothing special.
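
The same idea as the merb route above, sketched in PHP; robots_from_db() is a hypothetical helper, the APCu extension is assumed for caching, and the web server is assumed to rewrite /robots.txt to this script:

// robots.php: the web server rewrites /robots.txt here (assumed).
header('Content-Type: text/plain');

$robots = apcu_fetch('robots.txt', $hit); // local cache in front of the shared DB
if (!$hit) {
    $robots = robots_from_db(); // hypothetical: latest content from the database
    apcu_store('robots.txt', $robots, 300); // re-check the DB every five minutes
}
echo $robots;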