How to organize the Lua module path and write "require" calls without losing flexibility?

Say I have a project whose folder structure looks like this:
| main.lua
|
|---<model>   // this is a folder
|   |a.lua
|   |b.lua
|
|---<view>
|   |a.lua
|   |b.lua
model/a.lua requires model/b.lua: require "b"
view/a.lua requires view/b.lua: require "b"
main.lua requires files in both model and view.
Now I have trouble getting these modules loaded correctly. I know I can fix this by changing the require calls to:
model/a.lua: require "model.b"
view/a.lua: require "view.b"
But if I do that, I have to modify these files every time I change the folder structure.
So my questions are:
How can I fix the module path issue without hard-coding paths in the module files?
Why doesn't Lua use the module search rules of Node.js, which look easier?

When you require a module, the string argument you passed to require is forwarded to the module's chunk, and you can access it there using the variable-argument syntax .... You can use this to load other dependent modules that reside in the same path as the module currently being require'd, without depending on a fixed, hard-coded module name.
For your example, instead of doing:
-- model/a.lua
require "model.b"
and
-- view/a.lua
require "view.b"
You can do:
-- model/a.lua
local thispath = select(1, ...):match(".+%.") or ""
require(thispath.."b")
and
-- view/a.lua
local thispath = select(1, ...):match(".+%.") or ""
require(thispath.."b")
Now if you change the directory structure, e.g. move view to something like control/subcontrol/foobar, then control/subcontrol/foobar/a.lua (formerly view/a.lua) will try to require control/subcontrol/foobar/b.lua instead and "do the right thing".
Of course main.lua will still need to fully qualify the paths since you need some way to disambiguate between model/a.lua and view/a.lua.
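For completeness, a minimal sketch of such a main.lua (the module names are taken from the layout above):
-- main.lua
local model_a = require "model.a"
local view_a  = require "view.a"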

How to fix the module path issue without hard-coding paths in module files?
I don't have a better cross-platform solution; maybe you should plan the folder structure early on.
Why doesn't Lua use the module search rules of Node.js, which look easier?
Because Lua tries its best to rely only on ANSI C, and does so very successfully. In ANSI C, there is no such concept as a directory.

There are a couple of approaches you can use.
You can add relative paths to package.path as in this SO answer. In your case you'd want to add paths in main.lua that correspond to the various ways you might access the files. This keeps all the changes required when changing your directory structure local to one file.
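A minimal sketch of that idea, using the layout from the question (note that when two directories contain identically named files, the first matching template in package.path wins):
-- main.lua
-- Let require "b" resolve from the project's subdirectories as well.
package.path = package.path .. ";./model/?.lua;./view/?.lua"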
You can add absolute paths to package.path using debug.getinfo -- this may be a little easier since you don't need to account for all the relative accesses, but you still need to do this in main.lua when changing your directory structure, and you need to do string manipulation on the value returned by debug.getinfo to strip the module name and add the subdirectory names.
> lunit = require "lunit"
> info = debug.getinfo(lunit.run, "S")
> =info.source
@/usr/local/share/lua/5.2/lunit.lua
> =info.short_src
/usr/local/share/lua/5.2/lunit.lua
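A sketch of the string manipulation involved, assuming main.lua is the entry point (the leading @ marks a chunk loaded from a file; Windows backslashes are not handled here):
-- main.lua
local info = debug.getinfo(1, "S")
-- info.source is "@<path>/main.lua"; keep everything up to the last slash
local root = info.source:match("^@(.*/)") or "./"
package.path = package.path .. ";" .. root .. "?.lua"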

The solution is to add the folder of main.lua (the project root) to package.path in main.lua.
A naive way that supports folders one level deep:
-- main.lua
package.path = package.path .. ";../?.lua"
Note that requires in the project root will then also look up files outside of the project root, which is not desirable.
A better way is to use a library (e.g. paths, penlight) to resolve the absolute path and add that instead:
-- main.lua
local projectRoot = lib.abspath(".")
package.path = package.path .. ";" .. projectRoot .. "/?.lua"
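For instance, with Penlight (a sketch; it assumes Penlight is installed and uses its pl.path module):
-- main.lua
local plpath = require "pl.path"
local projectRoot = plpath.abspath(".")  -- absolute path of the directory main.lua runs from
package.path = package.path .. ";" .. projectRoot .. "/?.lua"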
Then, in your source files, use the folder name to scope the requires:
-- model/a.lua
require "model.b"
-- you can even do this
require "view.b"
and
-- view/a.lua
require "view.b"

Related

Make CMake variable set in subproject (variable nesting) available to all other subprojects

If I have a project tree similar to the following structure (each subproject is added via add_subdirectory() to the parent project):
CMakeLists.txt (TOP-LEVEL)
|
+ -- subproj1
|    |
|    + -- CMakeLists.txt (subproj1)
|    |
|    + -- subsubproj1
|         |
|         + -- CMakeLists.txt (subsubproj1)
|
+ -- subproj2
     |
     + -- CMakeLists.txt (subproj2)
I want to expose a variable set inside subsubproj1 to subproj2. The nature of this variable is irrelevant, but in my case it points at ${CMAKE_CURRENT_SOURCE_DIR}/include, that is, the include directory of subsubproj1, which (in my case) is a library used by subproj2. Currently I am re-assigning the variable with PARENT_SCOPE in each intermediate CMakeLists.txt between the one where the variable is first set (here subsubproj1) and the top level:
CMakeLists.txt (subsubproj1):
# Expose MY_VAR to subproj1
set(MY_VAR
    "Hello"
    PARENT_SCOPE
)
CMakeLists.txt (subproj1):
# Expose MY_VAR to the top-level project, thus making it visible to all
set(MY_VAR
    ${MY_VAR}
    PARENT_SCOPE
)
This can be applied to an arbitrarily nested project tree.
My question is: what is the common practice for doing what I described above? I could declare MY_VAR as a top-level variable to begin with, but what if for some reason I don't want to make it visible (as written text) there? In that case, is PARENT_SCOPE no longer an option, and should it be replaced with a straight declaration of that variable in the top-level CMakeLists.txt?
Targets
The nature of this variable is irrelevant but in my case it points at ${CMAKE_CURRENT_SOURCE_DIR}/include that is the include directory of subsubproj1, which (in my case) is a library used by subproj2.
No, the nature is not irrelevant.
Using variables to communicate include directories in CMake is a horrible anti-pattern from the 2.6.x days. You should not use a hammer to drive in a screw.
Non-IMPORTED targets are always global, so you can link to them safely. In subsubproj1 you would write:
add_library(myproj_subsubproj1 INTERFACE)
add_library(myproj::subsubproj1 ALIAS myproj_subsubproj1)
target_include_directories(
  myproj_subsubproj1
  INTERFACE
    "$<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>"
)
Then in subproj2, you would write:
target_link_libraries(myproj_subproj2 PRIVATE myproj::subsubproj1)
Worse options
The following options are worse because they forego the declarative parts of the CMake language and make your scripts dependent on subproject inclusion order. This is a significant complexity increase that (in my experience) is not warranted in build code.
Nevertheless, here are the imperative tools CMake provides:
1. Using a CACHE variable
The cache is a disk-persisted store for global variables. Setting one in any directory makes the value visible to all directories.
Note that there are a few potential drawbacks to this:
Prior to CMake 3.21, creating a new cache entry would delete a normal variable of the same name, leading to tricky situations where builds could become non-idempotent (bad!). See https://cmake.org/cmake/help/latest/policy/CMP0126.html
The user can overwrite your cache variables at the command line, so you cannot rely on either the defined-ness or the value of the variable when your CMake program starts running.
If you can live with this, then you can write:
# On CMake <3.21, honor normal variables. This check can be
# removed on CMake >=3.21.
if (NOT DEFINED MyProj_INADVISABLE_VARIABLE)
  set(MyProj_INADVISABLE_VARIABLE
      "${CMAKE_CURRENT_SOURCE_DIR}/include"
      CACHE STRING "Doc string...")
  # If you want to hint to your users that they should
  # not edit this variable, include the following line:
  mark_as_advanced(MyProj_INADVISABLE_VARIABLE)
endif ()
If you do not want to allow your users to override it, then you may consistently use an INTERNAL cache variable:
set(MyProj_INADVISABLE_VARIABLE "..." CACHE INTERNAL "...")
As long as you initialize it to a known value early on, then this will work okay as a global variable, but might incur disk traffic on writes.
2. Directory property
A slightly better approach is to use a custom directory property to communicate a value. In subsubproj1:
set_property(DIRECTORY "." PROPERTY inadvisable_prop "foo")
Then in subproj2:
get_property(value DIRECTORY "../subproj1/subsubproj1"
             PROPERTY inadvisable_prop)
Note that it is not an error to get a non-existent property, so be on the lookout for typos.
You could also use a GLOBAL property instead of a directory property, but global variables in general are a headache waiting to happen. You might as well set it on the directory to decrease the chances of unintended scoping bugs.
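For reference, the GLOBAL variant would look something like this (a sketch; the property name is illustrative):
# In subsubproj1:
set_property(GLOBAL PROPERTY inadvisable_prop "foo")
# In subproj2 (or anywhere else):
get_property(value GLOBAL PROPERTY inadvisable_prop)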

How to specify multiple Working directory in set_tests_properties in Ctest

I have a large supporting file located outside my test working directory.
Working Directory = C:/cmake/src/Test
Large Binary file to be parsed = C:/Largefiles/Binary.fft
I don't want to copy Binary.fft to C:/cmake/src/Test.
set(working_dir "C:/cmake/src/Test")
set(Large_Binary "C:/Largefiles/")
Is it possible to specify two working directories for a test, like this?
set_tests_properties(Test_Large PROPERTIES ENVIRONMENT PATH=${BIN_DIR} WORKING_DIRECTORY ${working_dir}${Large_Binary})
Or is there a better way to approach this situation?
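For what it's worth, a test has exactly one WORKING_DIRECTORY, so concatenating two paths will not work. A common pattern (a sketch; it assumes the test program reads the LARGE_BINARY environment variable, a name made up here) is to keep the single working directory and hand the large file's location to the test separately:
set_tests_properties(Test_Large PROPERTIES
  WORKING_DIRECTORY "${working_dir}"
  ENVIRONMENT "LARGE_BINARY=${Large_Binary}Binary.fft")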

How to access two different routines in two files in Trace32 CMM scripts

I have two files in two different folder locations in Trace32. I execute cd.do file_name subroutine_name in Trace32. Trace32 takes the location of the first executed command as the folder from which the following commands are executed. How can I execute routines from two different folders?
There is a pretty good guide on how to script in Trace32 here:
http://www2.lauterbach.com/pdf/practice_user.pdf
I do not understand why you need to have them in two different folders; shouldn't this be solved by just keeping them in the same folder?
Well, maybe you should simply use DO <myscript.cmm> instead of CD.DO <myscript.cmm>.
DO <myscript.cmm> executes the script at the given location but keeps the current working path.
CD.DO <myscript.cmm> changes the working path to the location of the given script and then executes the script.
However, I would recommend writing your scripts in a way that it doesn't matter whether they are called with CD.DO or just DO. You can achieve that with either absolute paths or paths relative to the script locations. (I prefer the second one.)
So imagine the following file structure:
C:\t32\myscripts\start.cmm
C:\t32\myscripts\folder1\routines.cmm
C:\t32\myscripts\folder2\loadapp.cmm
C:\t32\myscripts\folder2\application.elf
You can handle this structure with absolute paths like this:
start.cmm:
DO "C:/t32/myscripts/folder1/routines.cmm" subroutine_A
DO "C:/t32/myscripts/folder2/loadapp.cmm"
folder2/loadapp.cmm:
Data.LOAD.Elf "C:/t32/myscripts/folder2/application.elf"
DO "C:/t32/myscripts/folder1/routines.cmm" subroutine_B
With relative paths, you can use the prefix "~~~~" to access other files relative to the location of the currently executed PRACTICE script. The "~~~~" is replaced with the path of the currently executed script (just as "~" stands for your home directory). There is also a function OS.PPD(), which gives you the directory of the currently executed PRACTICE script.
So the above situation looks like this with relative paths:
start.cmm:
DO "~~~~/folder1/routines.cmm subroutine_A"
DO "~~~~/folder2/loadapp.cmm"
folder2/loadapp.cmm:
Data.LOAD.Elf "~~~~/application.elf"
DO "~~~~/../folder1/routines.cmm" subroutine_B

Matlab can't find member functions when directory changes. What can I do?

I have a Matlab program that does something like this
cd media;
for i = 1:files
    d(i).r = % some MATLAB file read command
    d(i).process();
end
cd ..;
When I change to my "media" directory I can still access member properties (such as r), but MATLAB can't seem to find member functions like process(). How is this problem solved? Is there some kind of global function pointer I can call? My current solution is to do two loops, but this is somewhat deeply chagrining.
There are two solutions:
Don't change directories; instead, give the file path to your file-read command, e.g.
d(i).r = load(['media' filesep 'yourfilename.mat']);
or
add the directory containing your process() function to the MATLAB path:
addpath('C:\YourObjectsFolder');
As mentioned by tdc, you can use
addpath(genpath('C:\YourObjectsFolder'));
if you also want to add all subdirectories to your path.
Jonas already mentioned addpath, but I usually use it in combination with genpath:
addpath(genpath('path_to_folder'));
which also adds all of the subdirectories of 'path_to_folder' as well.
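Applying the first suggestion to the original loop might look like this (a sketch; the file names are made up):
% No cd needed: read each file via its path instead
for i = 1:files
    d(i).r = load(fullfile('media', sprintf('file%d.mat', i)));
    d(i).process();
end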

Rails 3 : `require` within a generator

I am writing a Rails 3 generator, but things get a bit complicated, so I would like to extract some code and put it in a separate file.
So I create a file in the generator folder, and at the top of my generator file I put:
require 'relative/path/to/my/code.rb'
But when I launch the generator, it tells me that it can't find the file.
activesupport-3.0.0.rc/lib/active_support/dependencies.rb:219:in `require': no such file to load -- relative/path/to/my/code.rb (LoadError)
Does anybody know a workaround?
It depends on which Ruby version you are using.
In 1.8, it should work as you do it. In 1.9, you should use require_relative.
You should also not add '.rb' at the end; this is not recommended.
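For example (a sketch, using the path from the question minus the extension):
require_relative 'relative/path/to/my/code'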
The danger with a simple require with a relative path is that if this script is itself required by another, the path will be resolved relative to the first script called:
rootdir
- main.rb
- subdir1
  - second.rb
  - subdir11
    - third.rb
If main.rb is called and requires second.rb (with 'subdir1/second'), and second.rb then wants to require third.rb with 'subdir11/third', it will not work.
You could make the path relative to the first script ('subdir1/subdir11/third'), but that is not a good idea.
You could use __FILE__ and then make it an absolute path:
require File.expand_path('../subdir11/third.rb', __FILE__)
(the first .. is there to get to the directory which contains the file) or
require File.dirname(__FILE__) + '/subdir11/third.rb'
But the most common practice is to reference it from the rootdir.
In a gem, you can assume the rootdir will be in the $LOAD_PATH (or you can add it yourself).
In Rails, you can use require "#{RAILS_ROOT}/path" (Rails 2) or
require Rails.root.join('path') (Rails 3).