How do I tell the Dojo build system to run the ShrinkSafe optimization on files NOT included in a layer but present in the prefixes directories?
Thanks
There are two optimization parameters for custom builds: optimize and layerOptimize. In your case, you would need to set optimize=shrinksafe.
optimize: Specifies how to optimize module files. If "comments" is specified, then code comments are stripped. If "shrinksafe" is specified, then the Dojo compressor will be used on the files, and line returns will be removed. If "shrinksafe.keepLines" is specified, then the Dojo compressor will be used on the files, and line returns will be preserved. If "packer" is specified, then Dean Edwards' Packer will be used. Default: ""
layerOptimize: Specifies how to optimize the layer files. If "comments" is specified, then code comments are stripped. If "shrinksafe" is specified, then the Dojo compressor will be used on the files, and line returns will be removed. If "shrinksafe.keepLines" is specified, then the Dojo compressor will be used on the layer files, and line returns will be preserved. If "packer" is specified, then Dean Edwards' Packer will be used. Default: "shrinksafe"
You also need to declare the folder that contains the files to optimise as a package in the build profile, so that the builder can find and optimise them.
packages: [
    {
        name: "dojo",
        location: "dojo"
    },
    {
        name: "filesToOptimise",
        location: "folderLocation"
    }
]
Make sure you have a profile.js and package.json in that directory, and set the optimize: "shrinksafe" option in your build profile as well, as in the sketch below.
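For reference, a minimal sketch of what the whole profile.js might contain (the filesToOptimise package, folderLocation, and the path values are placeholders, not a definitive layout):
var profile = {
    basePath: "..",               // assumption: adjust to your source layout
    releaseDir: "release",
    optimize: "shrinksafe",       // non-layer module files
    layerOptimize: "shrinksafe",  // layer files (the default anyway)
    packages: [
        { name: "dojo",            location: "dojo" },
        { name: "filesToOptimise", location: "folderLocation" }
    ]
};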
Context: I'm trying to come up with a fix for https://github.com/tensorflow/tensorflow/issues/37861, where header files of an external dependency are manually listed, but that list is version-specific and hence impossible to keep up to date.
What is happening:
tf_http_archive(name = "com_google_protobuf", system_build_file = clean_dep("//third_party/systemlibs:protobuf.BUILD") ...) is invoked
tf_http_archive is a repository_rule with effectively nothing but ctx.template("BUILD.bazel", ctx.attr.system_build_file, {...}, False)
In the protobuf.BUILD there is a list HEADERS = ["google/protobuf/any.pb.h", ...] which is passed to the hdrs argument of cc_library calls
a genrule apparently symlinks those headers from $(INCLUDEDIR) into $(@D) (I'm not really familiar with Bazel, but IIUC the latter is the rule's output directory, used later in the build)
As I'm unfamiliar with Bazel in general, I'll just assume the list of headers is required and that a $(INCLUDEDIR)/google/protobuf folder exists somewhere (else) on the system, e.g. /usr/local/include.
Is there any way to get all *.h and *.inc files in that format (i.e. relative to $(INCLUDEDIR)) via a glob or similar? The Bazel glob function doesn't work for absolute paths, so it can't be used here.
I found https://github.com/bazelbuild/bazel/issues/8846, which suggests using new_local_repository with a build_file and a path set to (in this case) $(INCLUDEDIR), but I don't see how that could be applied to tf_http_archive (which has some conditions to either download the dependency or just use the system_build_file). That approach also seems to avoid the symlinking, which I'm highly suspicious of anyway, because the folder is added via -iquote while the include style is #include <...>; see my comments in https://github.com/tensorflow/tensorflow/issues/37861. A rough sketch of my reading of that idea follows.
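For illustration, the suggestion from that issue might look something like this in a WORKSPACE file (the repository name and the /usr/local/include path are my guesses, not anything TensorFlow defines):
new_local_repository(
    name = "system_protobuf_headers",  # hypothetical repository name
    path = "/usr/local/include",       # assumption: where $(INCLUDEDIR) points
    build_file_content = """
cc_library(
    name = "headers",
    hdrs = glob(["google/protobuf/**/*.h", "google/protobuf/**/*.inc"]),
    includes = ["."],  # exported as a system include dir, so #include <...> works
    visibility = ["//visibility:public"],
)
""",
)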
Bonus points for anyone contributing to the issue, or for ideas on why action_env environment variables seem to be ignored in a native.cc_library call.
I have a project whose build options are complicated enough that I have to run several external scripts during the configuration process. If these scripts, or the files that they read, are changed, then configuration needs to be re-run.
Currently the project uses Autotools, and I can express this requirement using the CONFIG_STATUS_DEPENDENCIES variable. I'm experimenting with porting the build process to Meson and I can't find an equivalent. Is there currently an equivalent, or do I need to file a feature request?
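For reference, in Autotools this is a single line in configure.ac (the script path here is illustrative):
AC_SUBST([CONFIG_STATUS_DEPENDENCIES], ['$(top_srcdir)/scripts/compute-symver-floor'])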
For concreteness, a snippet of the meson.build in progress:
pymod = import('python')
python = pymod.find_installation('python3')
svf_script = files('scripts/compute-symver-floor')
svf = run_command(python, svf_script, files('lib'),
host_machine.system())
if svf.returncode() == 0
svf_results = svf.stdout().split('\n')
SYMVER_FLOOR = svf_results[0].strip()
SYMVER_FILE = svf_results[2].strip()
else
error(svf.stderr())
endif
# next line is a fake API expressing the thing I can't figure out how to do
meson.rerun_configuration_if_files_change(svf_script, SYMVER_FILE)
This is what custom_target() is for.
Minimal example
svf_script = files('svf_script.sh')
svf_depends = files('config_data_1', 'config_data_2') # files that svf_script.sh reads
svf = custom_target('svf_config',
  command: svf_script,
  depend_files: svf_depends,
  build_by_default: true,
  output: 'fake')
This creates a custom target named svf_config. When out of date, it runs the svf_script command. It depends on the files in the svf_depends file object, as well as
all the files listed in the command keyword argument (i.e. the script itself).
You can also specify other targets as dependencies using the depends keyword argument.
output is set to 'fake' to stop meson from complaining about a missing output keyword argument. Make sure that there is a file of the same name in the corresponding build directory to stop the target from always being considered out-of-date. Alternatively, if your configure script(s) generate output files, you could list them in this array.
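For instance, if your script prints something you want to keep anyway, a variant sketch that turns its stdout into a real output file, so no fake stamp is needed (capture is a standard custom_target keyword; the output name is illustrative):
svf = custom_target('svf_config',
  command: svf_script,
  depend_files: svf_depends,
  capture: true,           # write the script's stdout to the output file
  output: 'svf_output',    # a real output instead of the 'fake' stamp
  build_by_default: true)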
In Go, filenames have semantic meaning. For example:
*_windows.go // Only include in compilations for Windows
*_unix.go // Only include in compilations for Unix
*_386.go // Only include in compilations for 386 systems
*_test.go // Only include when run with `go test`
However, I can't seem to get the following to work:
*_windows_test.go // Only include when running `go test` on windows
*_test_windows.go // Another attempt
Is this even possible with Go? If so, how?
Just use a build constraint on the test files.
// +build windows
A build constraint is evaluated as the OR of space-separated options; each option evaluates as the AND of its comma-separated terms; and each term is an alphanumeric word or, preceded by !, its negation. That is, the build constraint:
// +build linux,386 darwin,!cgo
corresponds to the boolean formula:
(linux AND 386) OR (darwin AND (NOT cgo))
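For instance, a minimal sketch of a Windows-only test file (the package and test names here are made up):
// +build windows

package foo

import "testing"

// TestWindowsOnly is compiled and run only when GOOS=windows,
// regardless of what the file itself is called.
func TestWindowsOnly(t *testing.T) {
    t.Log("running on Windows")
}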
Turns out I was wrong. It does work. The issue was that I also had a file called foo_unix_test.go, and apparently Go doesn't support the *_unix.go syntax as a special case.
I wanted to generate scaffold without stylesheets, and I found these two flags: --skip-stylesheets, --no-stylesheets. What's the difference between them?
If you run rails g scaffold --help, it will show help information for that generator along with a list of options.
Some of the options have default values. For example, if you look at
-y, [--stylesheets] # Generate Stylesheets
# Default: true
You can see it defaults to true. If you don't want to generate stylesheets, you can prefix the option with --no- to disable it.
The skip-stylesheets option is defined in the [Runtime options] section as follows:
-s, [--skip] # Skip files that already exist
So to answer your question:
--no-stylesheets doesn't generate stylesheets at all
--skip-stylesheets generates stylesheets but skips the ones that already exist.
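For example (the Post model and its attributes are placeholders):
rails g scaffold Post title:string body:text --no-stylesheets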
I am new to Stack Overflow, but I have already gotten a lot of help here; thanks to the community for that.
I'm trying to create a tool that shows me caller dependencies for legacy code.
I'm parsing a directory of C code with pycparser, and for each file I want to create a subgraph with pydot.
Two questions:
When parsing a C file, the parser resolves the #includes, and I also get functions from the included files in my AST. How can I tell whether a function comes from an include or from the actual file, or ignore the #includes altogether?
For each file I want to create a subgraph and then add all the functions in that file to the subgraph. I don't know in advance how many subgraphs I have to create...
I have a set of files, where each file is a frozenset containing the functions of that file.
Is something like this possible?
for files in SetOfFiles:
    # how to create a subgraph named after `files`?
    for function in files:
        self.graph.add_node(pydot.Node(function))  # --> add node to subgraph "files"
I hope you get my challenge... any ideas?
Thanks!
EDIT:
I solved the pydot question; it was quite easy... So I'm left with my pycparser problem :(
for files in ListOfFuncs:
    cluster_x = pydot.Cluster(files, label=files)
    for functions in files:
        cluster_x.add_node(pydot.Node(functions))
    graph.add_subgraph(cluster_x)
I can address the pycparser part. The preprocessor leaves #line directives that specify which file and line a piece of code came from, and pycparser consumes those. You can get that information from the AST it creates (see the pycparser tests for an example).
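A minimal sketch of that approach, assuming pycparser's standard parse_file/NodeVisitor API and a hypothetical example.c:
from pycparser import parse_file, c_ast

class FuncDefVisitor(c_ast.NodeVisitor):
    def __init__(self, source_file):
        self.source_file = source_file

    def visit_FuncDef(self, node):
        # node.coord.file is derived from the preprocessor's #line directives,
        # so functions pulled in via #include report the header's filename.
        if node.coord.file == self.source_file:
            print('%s defined at %s:%d' % (node.decl.name, node.coord.file, node.coord.line))

ast = parse_file('example.c', use_cpp=True)  # example.c is hypothetical
FuncDefVisitor('example.c').visit(ast)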