Confusion between CMake preset fields and cache variables

There is a pretty new but cool feature in CMake: presets.
I am confused by some of the possible fields of a preset: toolchainFile and installDir. These values could also be set with plain cache variables, using cacheVariables entries (namely CMAKE_TOOLCHAIN_FILE and CMAKE_INSTALL_PREFIX).
The documentation only mentions toolchainFile, stating:
This field takes precedence over any CMAKE_TOOLCHAIN_FILE value.
This does not resolve my confusion. The question is: which method should I use, and what is the difference?

[...] and what is the difference?
The docs explain the difference:
If a relative path is specified, it is calculated relative to the build directory, and if not found, relative to the source directory.
The cache variable is expected to be an absolute path (typically computed from ${sourceDir}).
The question is: Which method should I use[?]
You should probably use the more specific option. It's a bit clearer and marginally less typing. Not a huge deal either way.
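For concreteness, here is a minimal CMakePresets.json sketch showing both approaches side by side (the preset names, binaryDir and paths are made up; the toolchainFile and installDir fields require presets schema version 3, i.e. CMake 3.21+):
{
  "version": 3,
  "configurePresets": [
    {
      "name": "with-fields",
      "binaryDir": "${sourceDir}/build",
      "toolchainFile": "cmake/my-toolchain.cmake",
      "installDir": "${sourceDir}/install"
    },
    {
      "name": "with-cache-variables",
      "binaryDir": "${sourceDir}/build",
      "cacheVariables": {
        "CMAKE_TOOLCHAIN_FILE": "${sourceDir}/cmake/my-toolchain.cmake",
        "CMAKE_INSTALL_PREFIX": "${sourceDir}/install"
      }
    }
  ]
}
Both presets configure the project the same way; the first just spells out the intent with the dedicated fields and lets CMake resolve the relative toolchain path as described above.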


Does histogram_summary respect name_scope

I am getting a "Duplicate tag" error when I try to write out histogram summaries for a multi-layer network that I generate procedurally. I think that the problem might be related to naming. Imagine code like the following:
with tf.name_scope(some_unique_name):
  ...
  _ = tf.histogram_summary('weights', kernel_weights)
I'd naively assumed that 'weights' would be scoped to some_unique_name but I'm suspecting that it is not. Are summary names independent of name_scope?
As Dave points out, the tag argument to tf.histogram_summary(tag, ...) is indeed independent of the current name scope. Part of the reason for this is that the tag may be a string Tensor (i.e. computed by part of your graph), whereas name scopes are a purely client-side construct (i.e. Python-only), so there's no good way to make the scoping work consistently across the two modes of use.
However, if you're using a TensorFlow build from source (this should also be available in the next release, 0.8.0), you can use the following recipe to scope your tags (using Graph.unique_name(..., mark_as_used=False)):
with tf.name_scope(some_unique_name):
  # ...
  tf.histogram_summary(
      tf.get_default_graph().unique_name('weights', mark_as_used=False),
      kernel_weights)
Alternatively, you can do the following in the current version:
with tf.name_scope(some_unique_name) as scope:
  # ...
  tf.histogram_summary(scope + 'weights', kernel_weights)
They are.
I'm with you in thinking this is a bug, but I haven't run it past the designers of the op yet. Go ahead and open an issue for it on GitHub!
(I've run into this also and found it terribly annoying -- it prevents reuse of the model without deliberately parameterizing the summary op invocations.)

CustomActionData with semicolon-separated values causes string overflow - what are the common workarounds?

There have been a few questions answered regarding ICE03 (string overflow) for CustomActionData, but I cannot seem to determine the correct (or accepted) practice for working around this issue.
My current resolution was to reduce the length of the key-value pairs by keeping both the key and property names short, i.e. going from:
<CustomAction Id="MyCustomActionData"
Property="MyCustomActionCA"
Value="myKeyName1=[SOME_PROPERTY_NAME];myKeyName2=[SOME_DESCRIPTIVE_PROPNAME]"/>
to:
<CustomAction Id="MyCustomActionData"
Property="MyCustomActionCA"
Value="k1=[K1];k2=[K2]"/>
But I feel that I'm just sweeping the problem under the rug, and sooner or later I'll encounter it again (this is also based on the assumptions in my additional question below).
The more obvious solution is to re-evaluate and re-design so that the least amount of data needs to be passed down to the C# custom action (the classic "why would you want to declare a method that takes 20 parameters?" question from code reviewers). Obviously, in most languages today we can redesign the API to pass an object (a class, struct, etc., depending on the language) that contains what it needs, but how does one go about it for inter-process calls? (I've seen JSON-RPC messages with fairly large payloads, and I usually wonder whether somebody kept patching legacy code by adding more and more until it got bloated, rather than sitting down to re-design it, which is not possible on some "11th hour" deadline that just has to be fixed in the shortest time allowed.)
Perhaps the solution is to create an XML file and use expat (util:XmlFile) to search and replace the key-value pairs before calling the custom action, pass the filename of the altered XML as CustomActionData, and have the C# custom action deserialize it and treat it as objects. But that too feels a little clunky (it may also confuse the next developer who takes over my task in the future), not to mention that if it were passwords we'd want to keep them out of an XML file and keep them as a Property with Hidden="yes"...
So my question is: what are the clean/elegant solutions or patterns (or practices) for passing CustomActionData that may exceed the table column size?
If I may also ask an additional, somewhat related question: I am assuming that the linker (light) warning LGHT1076 is based on the length of the value (i.e. "keyA=[A];keyB=[B]") being too long, so if I chose very short property and key names it would most likely not trigger this warning. But from what I understand, the table column size is 255 characters (please correct me if I'm wrong), so at run time, if the property value is longer than the column size, can that cause issues (or truncation)?
The solution I use is to create multiple properties and then concatenate them into a single property at the end, like this:
<CustomAction Id="SetSqlProperties"
Property="SqlProperties"
Value="SQL_LOGIN_ID=[SQL_LOGIN_ID];SQL_PASSWORD=[SQL_PASSWORD];
SQL_AUTH_TYPE=[SQL_AUTH_TYPE];SQL_SERVERS=[SQL_SERVERS]" />
<CustomAction Id="SetServerProperties"
Property="ServerProperties"
Value="Domain=[DOMAIN];ComputerName=[COMPUTER_NAME];
FullServerName=[FULLCOMPUTERNAME];Version=[ProductVersion];
ServerType=[SERVER_TYPE];SrvMode=[SrvMode]" />
<CustomAction Id="SetPropertiesConfigReplace"
Property="ConfigReplace"
Value="InstallFolder=[INSTALLFOLDER];[ServerProperties];[SqlProperties]" />
In this example I would use the property [ConfigReplace], which contains all the SQL Server and local server values.
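On the C# side, here is a minimal sketch of how the deferred custom action could read those pairs back. It assumes the WiX DTF Microsoft.Deployment.WindowsInstaller API and that the deferred custom action is named ConfigReplace, so the ConfigReplace property above becomes its CustomActionData; the key names are taken from the example:
using Microsoft.Deployment.WindowsInstaller;

public class CustomActions
{
    [CustomAction]
    public static ActionResult ConfigReplace(Session session)
    {
        // In a deferred custom action, session.CustomActionData parses the
        // "key=value;key=value" string assembled by the Set* actions above.
        CustomActionData data = session.CustomActionData;
        string installFolder = data["InstallFolder"];
        string domain = data["Domain"];
        string sqlServers = data["SQL_SERVERS"];

        session.Log("Configuring {0} for {1}", installFolder, domain);
        return ActionResult.Success;
    }
}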
About ICE03, in the documentation you can find this:
The string's length is greater than the column width specified by the column definition. Note that the installer does not internally limit the column width to the specified value. See Column Definition Format.
MSDN

gulp-newer vs gulp-changed

What are the differences between them?
gulp-newer:
gulp.src(imgSrc)
.pipe(newer(imgDest))
.pipe(imagemin())
.pipe(gulp.dest(imgDest));
gulp-changed:
gulp.src(SRC)
.pipe(changed(DEST))
// ngmin will only get the files that
// changed since the last time it was run
.pipe(ngmin())
.pipe(gulp.dest(DEST));
It seems gulp-changed is more powerful, because it provides an option
hasChanged: changed.compareLastModifiedTime
I hope it's not too late to answer this question. I had to evaluate both of them at the source-code level for a recent project, and here is my take.
gulp-newer
At its core, this plugin compares the source and dest files' modified times (see the Node API) to decide whether the source file is newer than the dest file, or whether there is no dest file at all. Here is the related code in the plugin:
var newer = !destFileStats || srcFile.stat.mtime > destFileStats.mtime;
gulp-changed
This plugin by default also uses a file's modified time to decide which files to pass through the stream:
function compareLastModifiedTime(stream, cb, sourceFile, targetPath) {}
but it goes one step further by offering an option to compare the file's content SHA1 hash:
function compareSha1Digest(stream, cb, sourceFile, targetPath) {}
This information is nicely documented.
Conclusion
So theoretically speaking, if you use gulp-changed's default hasChanged: changed.compareLastModifiedTime, the two plugins should be about equally fast. If you use gulp-changed's hasChanged: changed.compareSha1Digest, it's reasonable to expect gulp-changed to be a bit slower, because it computes a SHA1 hash of the file content. I didn't benchmark, but I'm also interested in seeing some numbers.
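For reference, opting into the content-hash comparison is just a matter of passing that option (a sketch based on gulp-changed's documented API, reusing SRC, DEST and ngmin from the example above):
gulp.src(SRC)
  // compare SHA1 hashes of file contents instead of modified times
  .pipe(changed(DEST, {hasChanged: changed.compareSha1Digest}))
  .pipe(ngmin())
  .pipe(gulp.dest(DEST));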
Which to choose
gulp-changed, purely because of the developer behind it (sindresorhus). If one day this awesome man decides that he will stop supporting his gulp plugins, I think I will stop using gulp altogether.
Joking aside, though, gulp-changed's source code is gulp-y, while gulp-newer's source reads pretty much like just another node module's source with lots of promises. So another +1 for gulp-changed :)
HUGE EDIT
gulp-changed only works with a 1:1 source:dest mapping. If you need many:1, e.g. when using gulp-concat, choose gulp-newer instead.
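In that many:1 case, gulp-newer can be pointed at the single destination file instead of a directory (a sketch following its README; the paths and the gulp-concat usage here are illustrative):
var concat = require('gulp-concat');
var newer = require('gulp-newer');

gulp.src('lib/**/*.js')
  // compare every source file against the one concatenated output file
  .pipe(newer('dist/all.js'))
  .pipe(concat('all.js'))
  .pipe(gulp.dest('dist'));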
May I suggest gulp-newy, in which you can manipulate the path and filename in your own function. Then just use that function as the callback to newy(). This gives you complete control of the files you would like to compare.
This will allow 1:1 or many to 1 compares.
newy(function(projectDir, srcFile, absSrcFile) {
  // do whatever you want to here.
  // construct your absolute path, change filename suffix, etc.
  // then return /foo/bar/filename.suffix as the file to compare against
});
In order to answer this question you would have to compare both plugins' source code.
It seems that gulp-changed has more options, as you said, is more widely used (it has more downloads) and has more contributors, so it is likely to be better maintained and refactored, since it is used more.
Something that can make a difference, based on their documentation:
In the example for gulp-newer, it's used like this:
gulp.task('default', function() {
gulp.watch(imgSrc, ['images']);
});
So it seems that once this task is running, it will only notice files that change while the watcher is active.
For gulp-changed, they say it "will only get the files that changed since the last time it was run". So (I didn't try this on a working example) it seems that gulp-changed processes all files and passes on only the ones that have changed since the last execution; it always "looks" at all the files and decides internally (modified time? a hash? no clue, I didn't check the source) whether a file has changed since the last run. No watcher is needed for that.
All of this is based only on reading their official documentation.
A test "in the wild" would be very welcome!

Dynamic name resolution

How come some languages like PHP and Python use dynamic name resolution?
The only time I've ever thought of using it is to do something like this Python code, to save me from having to explicitly pass parameters to format:
"{a} {b} {c} {d}".format(**locals())
but it doesn't really take much work to just be explicit (and is a bit less error-prone):
"{a} {b} {c} {d}".format(a=a, b=b, c=c, d=d)
And for setting/getting locals in the same scope, I don't see why anyone would ever use that instead of a map.
Without dynamic name resolution, typos are caught, and you can automatically rename variables without breaking your program (unless something can still read the names of the variables). With dynamic name resolution, you get something that saves you from typing a line? Am I missing something?
Python documentation says they might remove it in the future. Is it more of a historical thing? What's an actual good use case for dynamic name resolution?
Most dynamically typed languages simply don't have a choice. For an expression like x.y you can't look up y statically, since what fields are available depends on the type of x, which is only known at runtime.
There are ways around this (such as type inference or JIT), but since the base language has to have dynamic name lookup, most such languages make it into a feature (see e.g. the power of Lua tables).
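To make the x.y point concrete, here is a small illustrative Python snippet (the class and attribute names are made up): which lookup succeeds can only be decided at runtime, because the available fields depend on the object actually bound to x:
import random

class Cat:
    def speak(self):
        return "meow"

class Dog:
    def speak(self):
        return "woof"

x = random.choice([Cat(), Dog()])
# x.speak is resolved dynamically: the source text alone doesn't say which
# class's 'speak' is meant (or whether it exists) until runtime.
print(x.speak())

# getattr makes the dynamic lookup explicit, even for names built at runtime.
print(getattr(x, "spe" + "ak")())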

Is there any way I can define a variable in LaTeX?

In LaTeX, how can I define a string variable whose content is used instead of the variable in the compiled PDF?
Let's say I'm writing a tech doc on a piece of software, and I want to define the package name in the preamble (or somewhere) so that if its name changes, I don't have to replace it in a lot of places but only in one place.
Add the following to your preamble:
\newcommand{\newCommandName}{text to insert}
Then you can just use \newCommandName{} in the text.
For more info on \newcommand, see e.g. wikibooks
Example:
\documentclass{article}
\newcommand\x{30}
\begin{document}
\x
\end{document}
Output:
30
Use the \def command:
\def \variable {Something that's better to use as a variable}
Be aware that \def overrides preexisting macros without any warning and can therefore cause various subtle errors. To overcome this, either use namespaced variable names (e.g. something like my_var) or fall back to the \newcommand and \renewcommand commands instead.
For variables describing distances, you would use \newlength (and manipulate the values with \setlength, \addtolength, \settoheight, \settowidth and \settodepth).
Similarly, you have access to \newcounter for things like section and figure numbers which should increment throughout the document. I've used this one in the past to provide code samples that were numbered separately from other figures...
Also of note is \savebox (with \newsavebox and \usebox), which allows you to store a bit of laid-out document for later re-use (and for use with \settowidth...).
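A minimal sketch of the length mechanism (the name \figwidth is made up):
\documentclass{article}
\newlength{\figwidth}                % declare the length
\setlength{\figwidth}{0.8\textwidth} % set it
\addtolength{\figwidth}{-1cm}        % adjust it later if needed
\begin{document}
The figure will be \the\figwidth\ wide.
\end{document}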
If you want to use \newcommand, you can also include \usepackage{xspace} and define command by \newcommand{\newCommandName}{text to insert\xspace}.
This can allow you to just use \newCommandName rather than \newCommandName{}.
For more detail, see http://www.math.tamu.edu/~harold.boas/courses/math696/why-macros.html
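A tiny sketch of what \xspace buys you (\pkg and the package name are made up):
\documentclass{article}
\usepackage{xspace}
\newcommand{\pkg}{libfoobar\xspace}
\begin{document}
\pkg is easy to install.   % the space after \pkg is kept here
Install \pkg, then run it. % but suppressed before the comma
\end{document}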
I think you probably want to use a token list for this purpose:
To set up the token list:
\newtoks\packagename
To assign the name:
\packagename={New Name for the package}
To put the name into your output:
\the\packagename.
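Putting those pieces together, a minimal complete document might look like this (the surrounding text is made up; \packagename comes from the snippets above):
\documentclass{article}
\newtoks\packagename                     % set up the token list
\packagename={New Name for the package}  % assign the name
\begin{document}
The \the\packagename\ distribution ships with examples.
\end{document}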