What's the difference between --skip-stylesheets and --no-stylesheets - ruby-on-rails-3

I wanted to generate a scaffold without stylesheets, and I found these two flags: --skip-stylesheets and --no-stylesheets. What's the difference between them?

If you run rails g scaffold --help, it will show help information for that generator along with a list of options.
Some of the options have default values. For example, if you look at
-y, [--stylesheets] # Generate Stylesheets
# Default: true
You can see it defaults to true. If you don't want to generate stylesheets, you can negate the option by prefixing its name with no-, i.e. --no-stylesheets.
The skip-stylesheets option is defined in the [Runtime options] section as follows:
-s, [--skip] # Skip files that already exist
So to answer your question:
--no-stylesheets doesn't generate stylesheets at all.
--skip-stylesheets generates stylesheets but skips any that already exist.
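For example (the scaffold name and attributes below are made up for illustration):
rails g scaffold Post title:string body:text --no-stylesheets    # boolean option negated with the no- prefix, so no stylesheet files are generated
rails g scaffold Post title:string body:text --skip-stylesheets  # per the description above: stylesheets are generated, but existing files are left untouched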

Related

How to glob an absolute path for files in Bazel

Context: I'm trying to come up with a fix for https://github.com/tensorflow/tensorflow/issues/37861 where header files of an external dependency are manually listed but that list is version specific and hence impossible to keep up to date.
What is happening:
tf_http_archive(name = "com_google_protobuf", system_build_file = clean_dep("//third_party/systemlibs:protobuf.BUILD") ...) is invoked
tf_http_archive is a repository_rule with effectively nothing but ctx.template("BUILD.bazel", ctx.attr.system_build_file, {...}, False)
In the protobuf.BUILD there is a list HEADERS = ["google/protobuf/any.pb.h", ...] which is passed to the hdrs argument of cc_library calls
a genrule apparently symlinks those headers from $(INCLUDEDIR) into $(@D) (I'm not really familiar with Bazel, but IIUC the latter is some internal build directory used later)
As I'm unfamiliar with Bazel in general, I'll assume the list of headers is required and that a $(INCLUDEDIR)/google/protobuf folder exists somewhere (else) on the system, e.g. /usr/local/include.
Is there any way to collect all *.h and *.inc files in that format (i.e. relative to $(INCLUDEDIR)) via a glob or similar? The Bazel glob function doesn't work with absolute paths, so it can't be used here.
I found https://github.com/bazelbuild/bazel/issues/8846, which suggests using new_local_repository with a build_file and a path set to (in this case) $(INCLUDEDIR), but I don't see how that could be applied to tf_http_archive (which has some conditions to either download the dependency or just use the system_build_file). That approach also seems to avoid the symlinking, which I'm suspicious of anyway because that folder is added via -iquote while the include style is #include <...>; see my comments in https://github.com/tensorflow/tensorflow/issues/37861.
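For reference, a minimal sketch of what that new_local_repository approach could look like, assuming the headers live under /usr/local/include; the repository name and target name are made up for illustration, and wiring this into tf_http_archive's download-or-use-system-lib switch is exactly the open part of the question:
# WORKSPACE (sketch)
new_local_repository(
    name = "system_protobuf_headers",  # hypothetical name
    path = "/usr/local/include",       # assumed value of $(INCLUDEDIR)
    build_file_content = """
cc_library(
    name = "headers",
    # glob() works here because the path above becomes the repository root,
    # so these patterns are relative rather than absolute.
    hdrs = glob(["google/protobuf/**/*.h", "google/protobuf/**/*.inc"]),
    includes = ["."],
    visibility = ["//visibility:public"],
)
""",
)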
Bonus points for contributing to the issue, or for ideas on why action_env environment variables seem to be ignored in a native.cc_library call.

test-unit automatic runner produces "invalid option" error on rake tasks?

Since I've installed gon, my rake tasks aren't working anymore.
I'm using:
Rails 3.2.22.2
ruby 2.2.0p0
gon-6.0.1
test-unit-3.0.8
I can't uninstall test-unit because:
$ rails c
/Users/me/.rbenv/versions/2.2.0/gemsets/project-gems/gems/activesupport-3.2.22.2/lib/active_support/dependencies.rb:251:in `require': Please add test-unit gem to your Gemfile: `gem 'test-unit', '~> 3.0'` (cannot load such file -- test/unit/testcase) (LoadError)
If I run rake -T, for example:
rake about # List versions of all Rails frameworks and the environment
... (all rake tasks here) ...
rake tmp:create # Creates tmp directories for sessions, cache, sockets, and pids
invalid option: -T
Test::Unit automatic runner.
Usage: /Users/me/.rbenv/versions/2.2.0/gemsets/project-gems/bin/rake [options] [-- untouched arguments]
-r, --runner=RUNNER Use the given RUNNER.
(c[onsole], e[macs], x[ml])
--collector=COLLECTOR Use the given COLLECTOR.
(de[scendant], di[r], l[oad], o[bject]_space)
-n, --name=NAME Runs tests matching NAME.
Use '/PATTERN/' for NAME to use regular expression.
--ignore-name=NAME Ignores tests matching NAME.
Use '/PATTERN/' for NAME to use regular expression.
-t, --testcase=TESTCASE Runs tests in TestCases matching TESTCASE.
Use '/PATTERN/' for TESTCASE to use regular expression.
--ignore-testcase=TESTCASE Ignores tests in TestCases matching TESTCASE.
Use '/PATTERN/' for TESTCASE to use regular expression.
--location=LOCATION Runs tests that defined in LOCATION.
LOCATION is one of PATH:LINE, PATH or LINE
--attribute=EXPRESSION Runs tests that matches EXPRESSION.
EXPRESSION is evaluated as Ruby's expression.
Test attribute name can be used with no receiver in EXPRESSION.
EXPRESSION examples:
!slow
tag == 'important' and !slow
--[no-]priority-mode Runs some tests based on their priority.
--default-priority=PRIORITY Uses PRIORITY as default priority
(h[igh], i[mportant], l[ow], m[ust], ne[ver], no[rmal])
-I, --load-path=DIR[:DIR...] Appends directory list to $LOAD_PATH.
--color-scheme=SCHEME Use SCHEME as color scheme.
(d[efault])
--config=FILE Use YAML fomat FILE content as configuration file.
--order=ORDER Run tests in a test case in ORDER order.
(a[lphabetic], d[efined], r[andom])
--max-diff-target-string-size=SIZE
Shows diff if both expected result string size and actual result string size are less than or equal SIZE in bytes.
(1000)
-v, --verbose=[LEVEL] Set the output level (default is verbose).
(important-only, n[ormal], p[rogress], s[ilent], v[erbose])
--[no-]use-color=[auto] Uses color output
(default is auto)
--progress-row-max=MAX Uses MAX as max terminal width for progress mark
(default is auto)
--no-show-detail-immediately Shows not passed test details immediately.
(default is yes)
--output-file-descriptor=FD Outputs to file descriptor FD
-- Stop processing options so that the
remaining options will be passed to the
test.
-h, --help Display this help.
Deprecated options:
--console Console runner (use --runner).
Here's the culprit:
invalid option: -T
Test::Unit automatic runner.
With or without rspec, same error.
Current solution: I ended up with these lines at the bottom of my application.rb:
Test::Unit::AutoRunner.need_auto_run = false if defined?(Test::Unit::AutoRunner)
(first link)
Test::Unit.run = true if defined?(Test::Unit) && Test::Unit.respond_to?(:run=)
(second link)
Anyone with a better idea?
Thank you!
ps: https://github.com/gazay/gon/issues/206
Try this in your Gemfile:
gem "test-unit", :require => false
or try test-unit 3.1.5.
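For context, a sketch of what the combined Gemfile entry might look like (the version constraint just mirrors the one from the error message above; adjust as needed):
# Gemfile (sketch): keep test-unit on the load path so ActiveSupport's
# require "test/unit/testcase" still succeeds, but stop Bundler from
# auto-requiring the gem at boot, which is what installs the Test::Unit
# auto runner and causes the "invalid option: -T" error.
gem 'test-unit', '~> 3.0', :require => false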

Recursive rsync over ssh, include only one file extension

I'm trying to rsync files over ssh from a server to my machine. The files are in various subdirectories, but I only want to keep the ones that match a certain pattern (e.g. blah.txt). I have done extensive googling and searching on Stack Overflow, and I've tried just about every permutation of --include and --exclude that has been suggested. No matter what I try, rsync grabs all files.
Just as an example of one of my attempts, I have used:
rsync -avze 'ssh' --include='*blah*.txt' --exclude='*' myusername@myserver.com:/path/top/files/directory /path/to/local/directory
To troubleshoot, I tried this command:
rsync -avze 'ssh' --exclude='*' myusername@myserver.com:/path/top/files/directory /path/to/local/directory
expecting it to not copy anything, but it still grabbed all of the files.
I am using rsync version 2.6.9 on OSX.
Is there something obvious I'm missing? I've been struggling with this for quite a while.
I was able to find a solution, with a caveat. Here is the working command:
rsync -vre 'ssh' --prune-empty-dirs --include='*/' --include='*blah*.txt' --exclude='*' user@server.com:/path/to/server/files /path/to/local/files
However! If I type this into my command line directly, it works. If I save it to a file, myfile.txt, and try `cat myfile.txt`, it no longer works! This makes no sense to me.
OS X ships a BSD-style rsync; from its man page:
https://www.freebsd.org/cgi/man.cgi?query=rsync&apropos=0&sektion=0&manpath=FreeBSD+8.0-RELEASE+and+Ports&format=html
-C, --cvs-exclude
    This is a useful shorthand for excluding a broad range of files that you often don't want to transfer between systems. It uses a similar algorithm to CVS to determine if a file should be ignored.
    The exclude list is initialized to exclude the following items (these initial items are marked as perishable -- see the FILTER RULES section):
        RCS SCCS CVS CVS.adm RCSLOG cvslog.* tags TAGS
        .make.state .nse_depinfo *~ #* .#* ,* _$* *$ *.old *.bak
        *.BAK *.orig *.rej .del-* *.a *.olb *.o *.obj *.so *.exe
        *.Z *.elc *.ln core .svn/ .git/ .bzr/
    Then, files listed in a $HOME/.cvsignore are added to the list, as are any files listed in the CVSIGNORE environment variable (all cvsignore names are delimited by whitespace).
    Finally, any file is ignored if it is in the same directory as a .cvsignore file and matches one of the patterns listed therein. Unlike rsync's filter/exclude files, these patterns are split on whitespace. See the cvs(1) manual for more information.
    If you're combining -C with your own --filter rules, you should note that these CVS excludes are appended at the end of your own rules, regardless of where the -C was placed on the command line. This makes them a lower priority than any rules you specified explicitly. If you want to control where these CVS excludes get inserted into your filter rules, you should omit the -C as a command-line option and use a combination of --filter=:C and --filter=-C (either on your command line or by putting the ":C" and "-C" rules into a filter file with your other rules). The first option turns on the per-directory scanning for the .cvsignore file. The second option does a one-time import of the CVS excludes mentioned above.
-f, --filter=RULE
    This option allows you to add rules to selectively exclude certain files from the list of files to be transferred. This is most useful in combination with a recursive transfer.
    You may use as many --filter options on the command line as you like to build up the list of files to exclude. If the filter contains whitespace, be sure to quote it so that the shell gives the rule to rsync as a single argument. The text below also mentions that you can use an underscore to replace the space that separates a rule from its arg.
    See the FILTER RULES section for detailed information on this option.
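As an aside, the same selection can be written with --filter rules instead of --include/--exclude; here is a sketch equivalent to the working command above (the paths and the 'blah' pattern are the question's placeholders):
rsync -vre 'ssh' --prune-empty-dirs --filter='+ */' --filter='+ *blah*.txt' --filter='- *' user@server.com:/path/to/server/files /path/to/local/files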

Documenting CMake scripts

I find myself in a situation where I would like to accurately document a host of custom CMake macros and functions and was wondering how to do it.
The first thing that comes to mind is simply using the built-in comment syntax and documenting the scripts in place, like so:
# -----------------------------
# [FUNCTION_NAME | MACRO_NAME]
# -----------------------------
# ... description ...
# -----------------------------
This is fine. However, I'd like to employ common doc generators, for instance Doxygen, to also generate external documentation that can be read without looking at the implementation (which is a common scenario).
One way would be to write a simple parser that generates a corresponding C/C++ header with the appropriate signatures and documentation directly from the CMake script, which could then be processed by Doxygen or comparable tools. One could also maintain such a header by hand, which is obviously tedious and error-prone.
Is there any other way to employ a documentation generator with CMake scripts?
Here is the closest I could get. The following was tested with CMake 2.8.10. Currently, CMake 3.0 is under development; it will get a new documentation system based on Sphinx and reStructuredText, which will presumably bring new ways to document your modules.
CMake 2.8 can extract documentation from your modules, but only documentation at the beginning of the file is considered. All documentation is written as CMake comments, beginning with a single #. Double ## is ignored (so you can add comments to your documentation). The end of the documentation is marked by the first non-comment line (e.g. an empty line).
The first line gives a brief description of the module. It must start with - and end with a period . or a blank line.
# - My first documented CMake module.
# description
or
# - My first documented CMake module
#
# description
In HTML, lines starting with two or more spaces (after the #) are formatted in a monospace font.
Example:
# - My custom macros to do foo
#
# This module provides the macro foo().
# These macros serve to demonstrate the documentation capabilities of CMake.
#
# FOO( [FILENAME <file>]
# [APPEND]
# [VAR <variable_name>]
# )
#
# The FOO() macro can be used to do foo or bar. If FILENAME is given,
# it even writes baz.
MACRO( FOO )
...
ENDMACRO()
To generate documentation for your custom modules only, call
cmake -DCMAKE_MODULE_PATH:STRING=. --help-custom-modules test.html
Setting CMAKE_MODULE_PATH allows you to define additional directories to search for modules; otherwise, your modules need to be in the default CMake location. --help-custom-modules limits the documentation generation to custom, non-CMake-standard modules. If you give a filename, the documentation is written to that file; otherwise it goes to stdout. If the filename has a recognized extension, the documentation is formatted accordingly.
The following formats are possible:
.html for HTML documentation
.1 to .9 for man page
.docbook for Docbook
anything else: plain text
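For example, producing the same documentation as a man page instead of HTML is just a matter of the output filename's extension (the filename here is illustrative):
cmake -DCMAKE_MODULE_PATH:STRING=. --help-custom-modules mymodules.1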

Optimize non-layered files in dojo build

How do I tell the Dojo build system to run the ShrinkSafe optimization on files NOT included in a layer but located in the prefixes directories?
Thanks
There are two optimization parameters for custom builds: optimize and layerOptimize. In your case, you would need to set optimize=shrinksafe.
optimize
    Specifies how to optimize module files. If "comments" is specified, then code comments are stripped. If "shrinksafe" is specified, then the Dojo compressor will be used on the files, and line returns will be removed. If "shrinksafe.keepLines" is specified, then the Dojo compressor will be used on the files, and line returns will be preserved. If "packer" is specified, then Dean Edwards' Packer will be used. Default: ""
layerOptimize
    Specifies how to optimize the layer files. If "comments" is specified, then code comments are stripped. If "shrinksafe" is specified, then the Dojo compressor will be used on the files, and line returns will be removed. If "shrinksafe.keepLines" is specified, then the Dojo compressor will be used on the layer files, and line returns will be preserved. If "packer" is specified, then Dean Edwards' Packer will be used. Default: "shrinksafe"
You need to declare the folder containing the files to optimise as a package in the build profile so that the build picks them up.
packages: [
    {
        name: "dojo",
        location: "dojo"
    },
    {
        name: "filesToOptimise",
        location: "folderLocation"
    }
]
Make sure you have a profile.js and a package.json in that directory, and set the optimize: "shrinksafe" option in your build profile as well.
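Putting this together, a sketch of a Dojo 1.7+ build profile (the basePath, releaseDir, and package locations are illustrative, not taken from the question):
// build.profile.js (sketch)
var profile = {
    basePath: "..",
    releaseDir: "release",

    // Non-layer modules in the listed packages are run through ShrinkSafe too.
    optimize: "shrinksafe",
    layerOptimize: "shrinksafe",

    packages: [
        { name: "dojo", location: "dojo" },
        { name: "filesToOptimise", location: "folderLocation" }
    ]
};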