Zend 1.10: place websites in virtual subdirectories - Apache

I have the following situation:
We have a webapp built with Zend Framework 1.10 that is available under www.domain.com/webapp.
On the server filesystem, the webapp is actually deployed in /srv/www/webapp.
Now, for reasons I can't detail too much, the project manager has requested, now that the app is finished, that each client literally receives his own URL.
So we would have:
www.domain.com/webapp/client1
www.domain.com/webapp/client2
Normally, what comes after webapp/ would be the controllers, actions and so forth from Zend.
Hence the question: is there a quick way in Apache to create these virtual subdirectories (client1, client2 in the example)?
I guess it should be possible with URL rewriting?
Thanks in advance

Rather than creating virtual directories, this can be solved by creating a specific route with Zend_Controller_Router_Route. Assuming the controller is User and the action that receives the name is view, then:
$route = new Zend_Controller_Router_Route(
    'webapp/:username',
    array(
        'controller' => 'user',
        'action'     => 'view',
        'username'   => 'defaultuser'
    )
);
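To complete the picture, here is a minimal sketch of registering that route and reading the parameter, assuming a standard Zend_Application bootstrap; the route name 'client' and the UserController below are illustrative, not taken from the original answer.
// Bootstrap.php (sketch): register the route with the front controller's router
protected function _initRoutes()
{
    $router = Zend_Controller_Front::getInstance()->getRouter();
    // if the front controller's baseUrl already covers /webapp,
    // the pattern may need to be just ':username'
    $router->addRoute('client', new Zend_Controller_Router_Route(
        'webapp/:username',
        array('controller' => 'user', 'action' => 'view', 'username' => 'defaultuser')
    ));
}

// UserController.php (sketch): the client name arrives as a request parameter
public function viewAction()
{
    $username = $this->getRequest()->getParam('username'); // e.g. "client1"
}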

Related

Datasource not working in JBoss 7.2

When I create a datasource, a service restart is required to make it work, regardless of the method used to create it (standalone.xml, JBoss CLI, JBoss Administration Console). Attached is the procedure I have written for my team (exported from our Wiki space). The datasource gets created successfully, but when I test the connection, I get this:
From JBoss Administration Console
Unknown error
Unexpected HTTP response: 500
Request
{
"address" => [
("subsystem" => "datasources"),
("data-source" => "dsMyApp")
],
"operation" => "test-connection-in-pool"
}
Response
Internal Server Error
{
"outcome" => "failed",
"failure-description" => "JBAS010440: failed to invoke operation: JBAS010442: failed to match pool. Check JndiName: java:/dsMyApp",
"rolled-back" => true,
"response-headers" => {"process-state" => "reload-required"}
}
From JBoss CLI
JBAS010440: failed to invoke operation: JBAS010442: failed to match pool. Check JndiName: java:/dsMyApp
If I restart the JBoss server, the datasource works fine (server, port, username and password are all correct).
Any thoughts?
Thank you
The quick answer: YES, restarting performs a reload, which then activates the datasource.
I suggest doing a reload with jboss-cli (it's the quickest way).
I've created all my datasources with jboss-cli and I always need to perform this action to make them work. After the reload, the datasource connection can be tested.
/opt/wildfly/bin/jboss-cli.sh --connect --controller=192.168.119.116:9990 --commands="reload --host=master"
Hope it helps
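As a sketch, here is the same flow from an interactive jboss-cli session on a standalone server (the command above targets a domain host with --host=master; dsMyApp is the datasource from the question):
# connect to the management interface (add --controller=<host>:9990 for a remote server)
/opt/wildfly/bin/jboss-cli.sh --connect

# a newly created datasource leaves the server in "reload-required"; reload to activate it
reload

# after the reload, the connection test succeeds
/subsystem=datasources/data-source=dsMyApp:test-connection-in-pool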

How can I replace the server in Web Component Tester

I have a project set up based around the Polymer Starter Kit, which includes Web-Component-Tester.
This project includes PHP server code which I would also like to test, by writing tests that run in the browser and exercise the PHP server code through Ajax calls.
This implies replacing the server that Web Component Tester is using ONLY when testing server-side code. I hope to make a separate gulp task for this.
Unfortunately, I don't understand the relationship between WCT, Selenium and whatever server is currently being run. I can see that the WCT command starts Selenium, but I can't find out what the web server is and how it is started. I suspect it is WCT itself, because there is configuration for mapping directories to URLs, but other than that I haven't a clue, despite trying to read the code.
Can someone explain how I go about making it run its own server when testing the client, but rely on an already set up web server (nginx) when testing the server side? I can set nginx to run on localhost, or another domain, if that is a way to choose a different configuration.
EDIT: I have now found that runner/webserver.js starts an express server, and that urls get mapped so the base directory for the test runner and the bower_components directory both get mapped to the /components url.
What is currently confusing me is in what circumstances this gets run. It appears that loading plugins somehow does it, but my understanding from reading the code for this is tenuous.
The answer is hinted at by a comment that Web Component Tester itself has in its runner/config.js file.
In wct.conf.js, you can use the registerHooks key in the object that gets returned to add a function that does this:
registerHooks: function(wct) {
    wct.hook('prepare:webserver', function(app, done) {
        var proxy = require('express-http-proxy');
        app.use('/api',
            proxy('pas.dev', {
                forwardPath: function(req, res) {
                    return require('url').parse(req.url).path;
                }
            })
        );
        done();
    });
}
This registerHooks function allows you to provide a route (/api in my case) which is proxied to a server that can run the PHP scripts.
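For context, a rough sketch of where this sits in wct.conf.js; only the registerHooks part comes from the answer above, the suites entry is an assumption about the rest of the file.
module.exports = {
    suites: ['test/'],   // test directories (assumption)
    registerHooks: function(wct) {
        wct.hook('prepare:webserver', function(app, done) {
            // mount the /api proxy here, exactly as in the snippet above
            done();
        });
    }
};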

Pydio Amazon S3 custom server

I am trying to integrate s3fs into Pydio to use my own storage servers (so not Amazon).
Accessing an s3fs mount as a local filesystem from Pydio is malfunctioning; a bunch of commands like ls don't work on it, therefore I must use the aws-sdk to interface with it from Pydio.
The problem is that from the Amazon SDK it's only possible to select Amazon's own servers through a region drop-down list. To complicate things, I also need to use a proxy to reach my own S3 storage.
Did anyone manage to implement this?
Using just the Amazon SDK, how would this look in PHP?
What I tried:
<?php
require_once("/usr/share/pydio/plugins/access.s3/aS3StreamWrapper/lib/wrapper/aS3StreamWrapper.class.php");
use Aws\S3\S3Client;

if (!in_array("s3", stream_get_wrappers())) {
    $wrapper = new aS3StreamWrapper();
    $wrapper->register(array(
        'protocol' => 's3',
        'http' => array(
            'proxy' => 'proxy://10.0.0.1:80',
            'request_fulluri' => true,
        ),
        'acl' => AmazonS3::ACL_OWNER_FULL_CONTROL,
        'key' => "<key>",
        'secretKey' => "<secret>",
        'region' => "s3.myprivatecloud.lan"
    ));
}
?>
Thanks
If this is still a pending question, FYI: in the latest versions (v6 beta 2) we've changed the access.s3 plugin to use the latest version of the aws-sdk, and we've also added some parameters to easily point this plugin at alternative S3-compatible storages.
-c
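For reference, a minimal sketch of pointing the AWS SDK for PHP (v3) at an S3-compatible endpoint through a proxy; the endpoint, credentials and proxy address are placeholders, and this is not the exact configuration the Pydio plugin uses.
<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$client = new S3Client(array(
    'version'                 => 'latest',
    'region'                  => 'us-east-1',                     // largely ignored by S3-compatible stores
    'endpoint'                => 'https://s3.myprivatecloud.lan', // your own storage endpoint
    'use_path_style_endpoint' => true,                            // bucket in the path, not the hostname
    'credentials'             => array('key' => '<key>', 'secret' => '<secret>'),
    'http'                    => array('proxy' => 'http://10.0.0.1:80'),
));

// optional: expose the buckets through the s3:// stream wrapper
$client->registerStreamWrapper();
var_dump(file_exists('s3://my-bucket/some-file.txt'));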

Setting host/remote_addr and other env properties in Rails 3 controller tests

In Rails 2, you could specify the host and other Rack env properties in controller tests like so:
should "spoof host and remote_addr" do
get "/thing/2", {}, :remote_addr => "192.71.1.2", :host => "somewhere.else"
end
However, for some reason this is not working out on Rails 3. I tried with a regular controller, and env["HTTP_HOST"] isn't being set as expected (same with "REMOTE_ADDR"). I also tried this:
should "use host and remote_addr" do
request.env["REMOTE_ADDR"] = "192.71.1.2"
request.env["HTTP_HOST"] = "git.gittit.it"
get "/thing/1"
end
This also used to work in Rails 2, but no longer in Rails 3. As a final test, I tried this with a route that resolved to a bare Rack app, same results.
How can I spoof the host and IP address in a Rails 3 controller test?
Depending on how you're accessing it in the controller... this has worked well for me:
request.stub!(:remote_ip).and_return('192.71.1.2')
At that point, when I use request.remote_ip in my controller, I get 192.71.1.2.
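A minimal sketch of how that stub fits into a controller spec; ThingsController and the show action are hypothetical, and stub!/and_return is the old RSpec 2 mocking syntax (newer RSpec would use allow(request).to receive(:remote_ip)).
describe ThingsController do
  it "sees the spoofed remote IP" do
    request.stub!(:remote_ip).and_return("192.71.1.2")
    get :show, :id => 1
    # any controller code that reads request.remote_ip now gets "192.71.1.2"
  end
end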

FastCGI authorizer support in lighttpd broken?

I'm in the process of writing a webapp in C++ using FastCGI with lighttpd. The reason I'm doing this the painful way is because the end product will go on an embedded device. Originally, I didn't know about FCGI modes; I thought everything was basically a responder. Then I learned about authorizers, and I've been trying to enable support for it.
Lighttpd seems to have no trouble putting an authorizer in front of static content, but when I try to protect another FCGI script it gives me 403 forbidden.
I've done a lot of research, and come to some conclusions:
Lighttpd's support for the "Variable-VAR_NAME: value" passing from authorizer to subsequent FCGIs is broken.
The language in the first link implies that you can protect dynamic content with authorizers, but this bug report says otherwise.
For the record, I'm using lighttpd 1.4.28 (x86 and ARM) and custom authentication (password hashed on client with SHA-512), because (1) TLS is impossible/unnecessary for this application, (2) basic HTTP authentication is not good enough, (3) digest authentication is broken in lighttpd, and (4) this isn't really intended to be a secure system anyway.
Here's the relevant part of my lighttpd.conf file:
fastcgi.server = (
    "main.fcgi" =>
    (( "mode" => "responder",
       "bin-path" => "/var/fcgi/main.fcgi",
       "socket" => "/tmp/fcgi.sock",
       "check-local" => "disable",
       "max-procs" => 1
    )),
    "/" =>
    (( "mode" => "authorizer",
       "bin-path" => "/var/fcgi/auth.fcgi",
       "socket" => "/tmp/fcgi.sock",
       "check-local" => "disable",
       "max-procs" => 1,
       "docroot" => "/var/fcgi"
    ))
)
To wrap it up, can anyone give me guidance on using an FCGI authorizer to control access to other FCGI scripts (or binaries), instead of just static files, on lighttpd? It would also be nice to get variable-passing working. Thanks for reading this far!
Everything I've seen seems to indicate that FastCGI authorizers do not work according to spec in lighttpd. What I've done is implement my own authorization scheme inside my normal responder code. This is fine for my purposes, but more complex websites may really feel the pain from this one. :( If anyone comes up with a better answer for this, respond and I'll eventually get around to changing the answer to yours.
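A rough sketch of that workaround, i.e. doing the check inside the FastCGI responder itself; the cookie name and validation logic are made up for illustration.
// responder-side authorization sketch using the FastCGI development kit (fcgiapp)
#include <fcgiapp.h>
#include <cstring>

static bool session_is_valid(const char* cookie_header) {
    // placeholder check: look for a session token in the Cookie header
    return cookie_header && std::strstr(cookie_header, "session=") != nullptr;
}

int main() {
    FCGX_Init();
    FCGX_Request req;
    FCGX_InitRequest(&req, 0, 0);
    while (FCGX_Accept_r(&req) == 0) {
        const char* cookie = FCGX_GetParam("HTTP_COOKIE", req.envp);
        if (!session_is_valid(cookie)) {
            FCGX_FPrintF(req.out, "Status: 403 Forbidden\r\n"
                                  "Content-Type: text/plain\r\n\r\nforbidden\n");
            continue;   // reject the request before doing any real work
        }
        FCGX_FPrintF(req.out, "Content-Type: text/plain\r\n\r\nhello, authorized client\n");
    }
    return 0;
}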
Update: lighttpd fixed this in lighttpd 1.4.42, released back in 2016.
https://redmine.lighttpd.net/issues/321