Dijit.Tree - Listing a directory on a web server - Dojo

I would like to know if there is an easy way to list all the files in a directory on a web server using Dojo's dijit.Tree.
I suppose I could populate a datastore of the files using PHP, but that seems like a major pain and something that could be done much more easily; I just can't think of anything else.
Any ideas?

You'll need to find a way to get the data from the server; Dojo can only handle things client-side.

Check out "dojox/data/demos/demo_FileStore_dojotree.html"; that's one specific way, and it requires PHP support on your server.
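For reference, wiring those pieces together on the client looks roughly like the following. This is a minimal, untested sketch assuming Dojo 1.x; the endpoint name "filestore.php" and the DOM id "treeNode" are placeholders, and the store expects a server-side script (like the demo's PHP) that implements the dojox.data.FileStore protocol.

    // Legacy (pre-AMD) Dojo 1.x style, matching the demo's era
    dojo.require("dojox.data.FileStore");
    dojo.require("dijit.tree.ForestStoreModel");
    dojo.require("dijit.Tree");

    dojo.addOnLoad(function () {
        // The store fetches directory listings from the server on demand.
        var store = new dojox.data.FileStore({
            url: "filestore.php",      // placeholder: your server-side FileStore endpoint
            pathAsQueryParam: true
        });

        // The model maps store items onto tree nodes; directories expose
        // their contents through the "children" attribute.
        var model = new dijit.tree.ForestStoreModel({
            store: store,
            query: {},
            rootLabel: "Files",
            childrenAttrs: ["children"]
        });

        var tree = new dijit.Tree({ model: model }, "treeNode");  // placeholder DOM id
        tree.startup();
    });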

What is the significance of the data-config.xml file in Solr?

And when should I use it? How is it configured? Can anyone please explain it in detail?
The data-config.xml file is the configuration file for the DataImportHandler (DIH) in Solr. It's one way of getting data into Solr: it allows one of the Solr nodes to connect through JDBC (or through a few other plugins) to a database server or a set of files and import them into Solr.
DIH has a few issues (for example, the non-distributed way it works), so it's usually suggested that you write the indexing code yourself and POST the documents to Solr from a suitable client, such as SolrJ, Solarium, SolrClient, MySolr, etc.
It has been mentioned that the DIH functionality really should be moved into a separate application, but that hasn't happened yet as far as I know.
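To make that concrete, a minimal data-config.xml for a JDBC import might look roughly like this; the driver, connection URL, credentials, table and field names below are placeholders, not taken from any particular setup:

    <dataConfig>
      <dataSource type="JdbcDataSource"
                  driver="com.mysql.jdbc.Driver"
                  url="jdbc:mysql://localhost/mydb"
                  user="reader" password="secret"/>
      <document>
        <!-- Each entity maps rows returned by the query onto Solr documents -->
        <entity name="item" query="SELECT id, title FROM items">
          <field column="id" name="id"/>
          <field column="title" name="title"/>
        </entity>
      </document>
    </dataConfig>

The file is then referenced from the DataImportHandler's requestHandler entry in solrconfig.xml, and the import is triggered with a full-import (or delta-import) command against the /dataimport handler.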

Possibilities of Datazen server migration

I know that similar topics have already been raised, but maybe there is some recent news or there are new ideas?
I want to migrate Datazen users/sources/dashboards etc. to another server (the production one) in a smooth way. I was trying to do that via backup/restore, but then I couldn't access the control panel on the target server. I received an error:
401 unauthorized access.
Maybe I should change something in the logs/config files on the destination server?
Any ideas? I would be grateful for any help!
I don't think there is a way to do this out of the box. However, the files are quite simple XML, so they can be pointed at a different server if you know PowerShell (and can work out the correct values from the server).
You will have to re-point the GUID, ServerGUID and ServerURI within the sources.xml file and then re-zip it (as .datazen). Provided you have your hubs set up the same way, Datazen will then believe the file belongs to your prod environment and you will be able to publish.

Can ExpressJS be told to load partial views from Redis or MongoDB?

I would like to have the templates stored in, say, Redis or MongoDB.
Is there a way to tell Express to load partials/templates from a database?
Edit
Looking at the code in https://github.com/visionmedia/express/blob/master/lib/view.js, I guess you'd have to write your own View... correct?
Is there a way to tell Express to load partials/templates from a database?
As in, does Express have a feature to do this with just an extra configuration option or something along those lines? No. Definitely not.
As in, is it possible? Yes, it is. You'll have to write the code to look up your templates, render them to HTML, and send them in the response, though.
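As a rough illustration of that last point, a route can fetch the template source itself and render it by hand. The sketch below is untested and assumes EJS plus a hypothetical loadTemplate(name, callback) helper that would wrap your Redis or MongoDB client (it is not an Express feature); it is shown against a modern Express API.

    var express = require('express');
    var ejs = require('ejs');
    var app = express();

    // Hypothetical helper: look the template source up in Redis/MongoDB by name.
    function loadTemplate(name, callback) {
        // e.g. redisClient.get('tpl:' + name, callback) or a Mongo findOne(...)
        callback(null, '<h1><%= title %></h1>');   // stubbed out for this sketch
    }

    app.get('/page', function (req, res, next) {
        loadTemplate('page', function (err, source) {
            if (err) return next(err);
            // Render the template string to HTML ourselves and send it.
            var html = ejs.render(source, { title: 'Hello from the database' });
            res.send(html);
        });
    });

    app.listen(3000);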

Controlling access to large files in Apache

I am looking to control access to some large files (we're talking many GB here) using signed URLs. The files are currently restricted by LDAP Basic authentication (mod_auth_ldap), but I need to change this to verify a signature passed as a query parameter in the URL.
Basically, I just need to run a script to verify the signature and then allow the request to proceed as if authentication had succeeded. My initial thought was just to use a simple CGI script, but as the files are so large I'm concerned about performance. So, really, this question is (probably) more like "are there any performance implications of streaming large files from a CGI script via Apache?", and if so, "is there a better way of doing this (short of writing a dedicated authentication module)?"
If this makes any sense, help would be much appreciated :)
P.S. I wasn't sure exactly what to search for here (10 minutes of Googling were fruitless), so I may very well be duplicating someone else's post.
Have a look at the crypto cookies/sessions in Apache. One way to do this is to require a valid session for that directory, forward anyone who does not have one to a CGI script, authenticate there, and then redirect back to the actual download.
That way Apache can use its normal sendfile() and other optimizations.
However, keep in mind that a shell or Perl script ending with a simple 'execvp' or 'exec cat' (or something like that) is not that expensive.
An alternative is more URL-based, like http://authmemcookie.sourceforge.net/.
Dw.
I ended up solving this with a CGI script, as mentioned. Cookies weren't an option because we need to be able to support clients that don't use cookies (apt).
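For what it's worth, the signature check in such a CGI script can be quite small. The following is a rough, untested sketch of the idea, written here in Node.js run as a CGI program (any CGI-capable language works the same way); the query parameter names (file, expires, sig), the secret handling and the file location are all assumptions, and a real version would also need to sanitise the file name against path traversal.

    #!/usr/bin/env node
    // CGI sketch: verify an HMAC signature passed in the query string,
    // then stream the requested file to the client.
    var crypto = require('crypto');
    var fs = require('fs');
    var querystring = require('querystring');

    var params = querystring.parse(process.env.QUERY_STRING || '');
    var secret = process.env.DOWNLOAD_SECRET || 'change-me';   // placeholder secret

    // The signature is assumed to cover the file name and an expiry timestamp.
    var expected = crypto.createHmac('sha256', secret)
                         .update(params.file + ':' + params.expires)
                         .digest('hex');

    if (params.sig !== expected || Date.now() / 1000 > Number(params.expires)) {
        process.stdout.write('Status: 403 Forbidden\r\nContent-Type: text/plain\r\n\r\n');
        process.stdout.write('Invalid or expired link\n');
        process.exit(0);
    }

    // NOTE: params.file must be sanitised in a real script.
    process.stdout.write('Content-Type: application/octet-stream\r\n\r\n');
    fs.createReadStream('/data/files/' + params.file).pipe(process.stdout);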

How to compare test website and live website

We have our production server running our website. Then we have a test server which has exactly the same data but with code changes for some new functionality. This web app has over 500 pages.
Is there any program that can:
Log in to the test site
Crawl through each page and then save the page as HTML
Compare it with the same page saved from the live site?
This way we can make sure that new features that we add to our test site will not break the live site when code updates are applied to production.
I am currently trying to use the WinHTTrack website copier and then comparing the test and live folders with a code comparison tool like Beyond Compare. This works OK, but a lot of files show up as changed simply because of the domain name differences.
Looking forward to ideas / solutions for this problem.
Regards
Have you looked at using Watir for this? It's not exactly what you are looking for, but it might give you more granularity in your tests and ensure the site is functionally identical, rather than getting caught up on changing GUIDs, timestamps and all the other things that tend to change from day to day across any website of significant size as part of its standard functionality.
Apparently you can't make consistent, reproducible builds in your project? I would recommend moving towards that in the long run; it will save you a lot of headaches. That way you would know exactly what was deployed to which server and when, so there would be no more need to bend over backwards to get the deployed sources back like this...
I know this is not a direct solution to your problem... but maybe it is worth weighing whether you would save more in the long run by investing the effort into your build process now, instead of implementing this workaround (and then improving your build process anyway, because one day you will almost surely need to do that).
wget has a --convert-links option, and there are also some options for preserving cookies that might let you do the crawl while logged in: http://drupal.org/node/118759#comment-664498
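For example (untested; the host name, form fields and file names below are placeholders):

    # Log in once and keep the session cookie (form fields depend on your site)
    wget --save-cookies cookies.txt --keep-session-cookies \
         --post-data 'user=me&pass=secret' -O /dev/null https://test.example.com/login

    # Mirror the site using that cookie, rewriting links so the copies compare cleanly
    wget --mirror --convert-links --load-cookies cookies.txt https://test.example.com/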
Use an offline downloader, download all the files to your computer from both sources, then compare the folder contents using a free tool like Total Commander.
EDIT
Load both of your sources into a version control system and compare them there.