How can one include another LiveScript file in LiveScript?

How can one use code in a LiveScript file from another LS file? For example:
# In script-one.ls
foo = 5
# In script-two.ls
bar = -> foo + 3
Simply including both files in the HTML via script tags does not seem to work. Changing the first script to export foo = 5 and using require! './script-one' (or variants) in the second script doesn't work either.
And what about circular dependencies?

LiveScript simply compiles to JavaScript; the module format is your decision, just as it is in JS.
The export keyword currently compiles to a CommonJS assignment (exports.foo = ...) and will not work in browsers unless you use something like browserify (http://browserify.org/) to bundle your modules (ES6 module compatibility is planned for the future).
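For example, a minimal sketch of the CommonJS route (assuming the compiled output is bundled with browserify before being loaded in the browser):
# script-one.ls
export foo = 5
# script-two.ls
{foo} = require './script-one'
bar = -> foo + 3
Compile and bundle, then include bundle.js with a single script tag:
lsc -c script-one.ls script-two.ls
browserify script-two.js -o bundle.js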

Related

how to customize nix package builder script

The root problem is that Nix uses autoconf to build libxml2-2.9.14 instead of cmake, and a consequence of this is that the cmake configuration is missing (details like the version number and platform-specific dependencies such as ws2_32, which are needed by my project's cmake scripts). libxml2-2.9.14 already comes with a cmake configuration and works nicely, except that Nix does not use it (I guess they have their own reasons).
Therefore I would like to reuse the libxml2-2.9.14 nix package and override the builder script with my own (which is a trivial cmake dance).
Here is my attempt:
defaultPackage = forAllSystems (system:
  let
    pkgs = nixpkgsFor.${system};
    cmakeLibxml = pkgs.libxml2.overrideAttrs (o: rec {
      PROJECT_ROOT = builtins.getEnv "PWD";
      builder = "${PROJECT_ROOT}/nix-libxml2-builder.sh";
    });
  in
Where nix-libxml2-builder.sh is my script calling cmake with all the options I need. It fails like this:
last 1 log lines:
> bash: /nix-libxml2-builder.sh: No such file or directory
For full logs, run 'nix log /nix/store/andvld0jy9zxrscxyk96psal631awp01-libxml2-2.9.14.drv'.
As you can see, the issue is that PROJECT_ROOT does not get set (it is ignored), and I do not know how else to point to my builder script.
What am I doing wrong?
Guessing from the use of defaultPackage in your snippet, you use flakes. Flakes are evaluated in pure evaluation mode, which means there is no way to influence the build from outside. Hence, getEnv always returns an empty string (unfortunately, this is not properly documented).
There is no need to refer to the builder script via $PWD. The whole flake is copied to the nix store so you can use your files directly. For example:
builder = ./nix-libxml2-builder.sh;
That said, the build will probably still fail, because cmake will not be available in the build environment. You would have to override the nativeBuildInputs attribute to add cmake there.
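Putting both points together, a sketch of the override might look like this (adding cmake via nativeBuildInputs is my assumption, not something the original flake already does):
cmakeLibxml = pkgs.libxml2.overrideAttrs (o: {
  # the script is copied to the nix store together with the rest of the flake
  builder = ./nix-libxml2-builder.sh;
  # make cmake available inside the build environment
  nativeBuildInputs = (o.nativeBuildInputs or [ ]) ++ [ pkgs.cmake ];
});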

Meson equivalent of automake's CONFIG_STATUS_DEPENDENCIES?

I have a project whose build options are complicated enough that I have to run several external scripts during the configuration process. If these scripts, or the files that they read, are changed, then configuration needs to be re-run.
Currently the project uses Autotools, and I can express this requirement using the CONFIG_STATUS_DEPENDENCIES variable. I'm experimenting with porting the build process to Meson and I can't find an equivalent. Is there currently an equivalent, or do I need to file a feature request?
For concreteness, a snippet of the meson.build in progress:
pymod = import('python')
python = pymod.find_installation('python3')
svf_script = files('scripts/compute-symver-floor')
svf = run_command(python, svf_script, files('lib'),
                  host_machine.system())
if svf.returncode() == 0
  svf_results = svf.stdout().split('\n')
  SYMVER_FLOOR = svf_results[0].strip()
  SYMVER_FILE = svf_results[2].strip()
else
  error(svf.stderr())
endif
# next line is a fake API expressing the thing I can't figure out how to do
meson.rerun_configuration_if_files_change(svf_script, SYMVER_FILE)
This is what custom_target() is for.
Minimal example
svf_script = files('svf_script.sh')
svf_depends = files('config_data_1', 'config_data_2') # files that svf_script.sh reads
svf = custom_target('svf_config',
  command: svf_script,
  depend_files: svf_depends,
  build_by_default: true,
  output: 'fake')
This creates a custom target named svf_config. When out of date, it runs the svf_script command. It depends on the files in the svf_depends file object, as well as all the files listed in the command keyword argument (i.e. the script itself).
You can also specify other targets as dependencies using the depends keyword argument.
output is set to 'fake' to stop meson from complaining about a missing output keyword argument. Make sure that there is a file of the same name in the corresponding build directory to stop the target from always being considered out-of-date. Alternatively, if your configure script(s) generate output files, you could list them in this array.
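If compute-symver-floor can write its results to a file instead of stdout, a variant that declares a real output (the output file name and the extra '@OUTPUT@' argument are only illustrative) avoids the placeholder entirely:
svf = custom_target('svf_config',
  command: [python, svf_script, files('lib'), host_machine.system(), '@OUTPUT@'],
  depend_files: svf_depends,
  build_by_default: true,
  output: 'symver-floor.txt')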

rpm spec file skeleton to real spec file

The aim is to have a skeleton spec file, fun.spec.skel, which contains placeholders for Version, Release, and that kind of thing.
For the sake of simplicity I try to make a build target which fills in those variables, transforming fun.spec.skel into fun.spec, which I can then commit to my GitHub repo. This is done so that rpmbuild -ta fun.tar works nicely and no manual modifications of fun.spec.skel are required (people tend to forget to bump the version in the spec file, but not in the build system).
Assuming the implied question is "How would I do this?", the common answer is to put placeholders in the file like ##VERSION## and then sed the file, or get more complicated and have autotools do it.
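A minimal sketch of the sed variant (the placeholder names and the version value are only illustrative):
sed -e 's/##VERSION##/1.0.0/' -e 's/##RELEASE##/1/' fun.spec.skel > fun.spec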
We place a version.mk file in our project directories which defines the variables we need. Sample content includes:
RELPKG=foopackage
RELFULLVERS=1.0.0
As part of a script which builds the RPM, we can source this file:
#!/bin/bash
. "$(pwd)/version.mk"
export RELPKG RELFULLVERS
if [ -z "${RELPKG}" ]; then exit 1; fi
if [ -z "${RELFULLVERS}" ]; then exit 1; fi
This leaves us a couple of options to access the values which were set:
We can define macros on the rpmbuild command line:
% rpmbuild -ba --define "relpkg ${RELPKG}" --define "relfullvers ${RELFULLVERS}" foopackage.spec
We can access the environment variables using %{getenv:...} in the spec file itself (though error handling is harder this way):
%define relpkg %{getenv:RELPKG}
%define relfullvers %{getenv:RELFULLVERS}
From here, you simply use the macros in your spec file:
Name: %{relpkg}
Version: %{relfullvers}
We have similar values (provided by environment variables enabled through Jenkins) which provide the build number which plugs into the "Release" tag.
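A sketch of how that Release wiring can look (BUILD_NUMBER is the variable Jenkins exports; the macro name here is just an example):
%define relbuildnum %{getenv:BUILD_NUMBER}
Release: %{relbuildnum}%{?dist}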
I found two ways:
a) use something like
Version: %(./waf version)
where version is a custom waf target
# in the wscript; VERSION is assumed to be defined at the top of the wscript
from waflib.Context import Context

def version_fun(ctx):
    print(VERSION)

class version(Context):
    """Print out the version and only the version"""
    cmd = 'version'
    fun = 'version_fun'
This checks the version at RPM build time.
b) create a target that modifies the specfile itself
from waflib.Context import Context
from waflib import Logs
import re

# VERSION is assumed to be defined at the top of the wscript
def bumprpmver_fun(ctx):
    spec = ctx.path.find_node('oregano.spec')
    data = None
    with open(spec.abspath()) as f:
        data = f.read()
    if data:
        data = re.sub(r'^(\s*Version\s*:\s*)[\w.]+\s*',
                      r'\g<1>{0}\n'.format(VERSION),
                      data, flags=re.MULTILINE)
        with open(spec.abspath(), 'w') as f:
            f.write(data)
    else:
        Logs.warn("Didn't find that spec file: '{0}'".format(spec.abspath()))

class bumprpmver(Context):
    """Bump version"""
    cmd = 'bumprpmver'
    fun = 'bumprpmver_fun'
The latter is used in my pet project oregano on GitHub.

LESS: generate different CSSs for different theme (color variation)

I would like to manage the creation of different "themes" for my site using LESS.
My idea is to generate different compiled .css files, each using a specific variableX.less that is imported by the root file.
Here is a simple example:
1) I have 2 different color schemes in 2 distinct files: variable1.less and variable2.less.
2) A file style.less should have an @import rule like "@import variableX.less", and obviously this 'X' should change, taking the values '1' and '2'.
3) The compiler should then generate style1.css and style2.css, each based on the respective variable1.less and variable2.less.
How can I obtain this?
You need to flip your import directions.
The style.less file should not import any variables.
Instead, each variableN.less file should import style.less after defining all of its variables.
These files will then each compile to a full set of rules based on their variable values.
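A minimal sketch (the variable name @primary is just an example):
// style.less
body { background: @primary; }
// variable1.less
@primary: #336699;
@import "style.less";
// variable2.less
@primary: #993333;
@import "style.less";
Compiling each entry file, e.g. lessc variable1.less style1.css and lessc variable2.less style2.css, then yields one themed stylesheet per variable file.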

How to organize Lua module path and write "require" calls without losing flexibility?

Say I have a project, whose folder structure looks like below:
| main.lua
|
|---<model> // this is a folder
| |a.lua
| |b.lua
|
|---<view>
|a.lua
|b.lua
model/a.lua requires model/b.lua: require "b"
view/a.lua requires view/b.lua: require "b"
main.lua requires files in model and view.
Now I have problems getting these modules loaded correctly. I know I can fix it by changing the require calls to:
model/a.lua: require "model.b"
view/a.lua: require "view.b"
But if I do that, I have to modify these files every time I change the folder structure.
So my questions are:
How can I fix the module path issue without hard-coding paths in the module files?
Why doesn't Lua use the module search rules of Node.js, which look easier?
When you require a module, the string parameter passed to require is forwarded into the module, where you can access it using the variable-argument syntax .... You can use this to require other dependent modules which reside in the same path as the current module, without depending on a fixed, hard-coded module name.
For your example, instead of doing:
-- model/a.lua
require "model.b"
and
-- view/a.lua
require "view.b"
You can do:
-- model/a.lua
local thispath = select(1, ...):match(".+%.") or ""
require(thispath.."b")
and
-- view/a.lua
local thispath = select(1, ...):match(".+%.") or ""
require(thispath.."b")
Now if you change the directory structure, e.g. move view to something like control/subcontrol/foobar, then control/subcontrol/foobar/a.lua (formerly view/a.lua) will now require control/subcontrol/foobar/b.lua instead and "do the right thing".
Of course main.lua will still need to fully qualify the paths since you need some way to disambiguate between model/a.lua and view/a.lua.
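For example, main.lua itself still contains fully qualified requires (a sketch, nothing more):
-- main.lua
local model_a = require "model.a"
local view_a  = require "view.a"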
How can I fix the module path issue without hard-coding paths in the module files?
I don't have any better cross-platform solution; maybe you should plan the folder structure early on.
Why doesn't Lua use the module search rules of Node.js, which look easier?
Because Lua tries its best to rely only on ANSI C, which has been very successful, and ANSI C has no concept of directories.
There are a couple of approaches you can use.
You can add relative paths to package.path as in this SO answer. In your case you'd want to add paths in main.lua that correspond to the various ways you might access the files. This keeps all the changes required when changing your directory structure local to one file.
You can add absolute paths to package.path using debug.getinfo -- this may be a little easier since you don't need to account for all the relative accesses, but you still need to do this in main.lua when changing your directory structure, and you need to do string manipulation on the value returned by debug.getinfo to strip the module name and add the subdirectory names.
> lunit = require "lunit"
> info = debug.getinfo(lunit.run, "S")
> =info.source
@/usr/local/share/lua/5.2/lunit.lua
> =info.short_src
/usr/local/share/lua/5.2/lunit.lua
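A sketch of the string manipulation this implies, run from main.lua (the pattern used to strip the file name is only one way to do it):
-- main.lua
local source = debug.getinfo(1, "S").source   -- e.g. "@/path/to/project/main.lua"
local root = source:match("^@(.*[/\\])") or "./"
package.path = package.path .. ";" .. root .. "?.lua;" .. root .. "?/init.lua"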
The solution is to add the folder of main.lua (project root) to package.path in main.lua.
A naive way to support folders one level deep:
-- main.lua
package.path = package.path .. ";../?.lua"
Note that requires issued from the project root will then also look up files outside the project root, which is not desirable.
A better way is to use some library (e.g. paths or penlight) to resolve the absolute path and add that instead:
-- main.lua
local projectRoot = lib.abspath(".")  -- 'lib' stands for whichever path library you chose
package.path = package.path .. ";" .. projectRoot .. "/?.lua"
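For example, with Penlight (assuming it is installed and main.lua is run from the project root):
-- main.lua
local path = require "pl.path"                 -- Penlight's path module
local projectRoot = path.abspath(".")
package.path = package.path .. ";" .. projectRoot .. "/?.lua"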
Then in your sources, use the folder name to scope the files:
-- model/a.lua
require "model.b"
-- you can even do this
require "view.b"
and
-- view/a.lua
require "view.b"