ROOT's TMVA user guide

I am using ROOT's TMVA (developed at CERN); my ROOT version is 6.24.
The user manual I have is for TMVA version 4.3.0 (for ROOT >= 6.12/00, dated May 26, 2020),
but the manual seems to differ slightly from my current version (for example, in the options available for a particular machine learning model).
Is there an updated user manual, or a portal that documents the options available for each machine learning model?

I was looking at the same thing as you today (planning to work on the SVM machine learning method).
Judging from a previous post from the 4th of April (see https://root-forum.cern.ch/t/does-up-to-date-tmva-user-guide-exists/49465/2), it seems that the latest update of the TMVA user guide is still the one from May 2020.
The ROOT developers say that the most up-to-date guidance on how to use TMVA is the tutorials at https://root.cern.ch/doc/master/group__tutorial__tmva.html (which I did not try) or the file TMVAClassification.C (https://root.cern.ch/doc/master/TMVAClassification_8C.html).
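For the SVM method specifically, here is a rough sketch of the booking step, modelled on TMVAClassification.C; the input file, tree names, and variable names are placeholders, and the option values are simply the ones the tutorial uses, so they may differ between ROOT versions.

    // Minimal sketch of booking TMVA's SVM method (modelled on TMVAClassification.C).
    // "input.root", the tree names, and the variables are placeholders.
    #include "TFile.h"
    #include "TTree.h"
    #include "TMVA/Factory.h"
    #include "TMVA/DataLoader.h"
    #include "TMVA/Types.h"

    void BookSVM()
    {
       TFile *input   = TFile::Open("input.root");
       TTree *sigTree = (TTree*)input->Get("TreeS");
       TTree *bkgTree = (TTree*)input->Get("TreeB");

       TFile *output = TFile::Open("TMVA_SVM.root", "RECREATE");
       TMVA::Factory factory("TMVAClassification", output,
                             "!V:!Silent:AnalysisType=Classification");
       TMVA::DataLoader loader("dataset");

       loader.AddVariable("var1", 'F');
       loader.AddVariable("var2", 'F');
       loader.AddSignalTree(sigTree, 1.0);
       loader.AddBackgroundTree(bkgTree, 1.0);
       loader.PrepareTrainingAndTestTree("", "SplitMode=Random:NormMode=NumEvents:!V");

       // The per-method options live in the last argument of BookMethod();
       // the values below are the tutorial's settings for the SVM method.
       factory.BookMethod(&loader, TMVA::Types::kSVM, "SVM",
                          "Gamma=0.25:Tol=0.001:VarTransform=Norm");

       factory.TrainAllMethods();
       factory.TestAllMethods();
       factory.EvaluateAllMethods();
       output->Close();
    }

For the options your installed version actually recognises, the TMVA::MethodSVM class reference in the doxygen documentation for your specific ROOT release is probably the most reliable place to look.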
Hope you find what you want!

Related

F#: Is Profile 47 required for Microsoft.FSharp.Data.TypeProviders?

This is a follow-up to my post yesterday. To recap, I received this error message when trying to build my project:
FSC: Error FS2024: Static linking may not use assembly that targets different profile
I consulted some kind people in the F# fpchat.com channel, and one of them suggested that the error could be due to the fact that I did not have Profile 47, because FSharp.Data uses Profile 47. I tried downloading the targeting pack for Profile 47, but was redirected to the Microsoft homepage instead. I tried the second answer on this SO page, but that did not work either. As of now, I am still unable to acquire Profile 47.
I consulted the FSharp.Data GitHub page, but it is not clear to me why Profile 47 is needed. I use VS2013 compiling to FSharp.Core 4.3.0.0; shouldn't that be sufficient, since the GitHub page lists it as one of the supported platforms?
I have created a new project, re-added all my source files and references, and tried re-building. I have even tried uninstalling and then re-installing Microsoft VS, even though I know it is likely irrelevant.
I think it is most probable that the problem lies with referencing FSharp.Data.TypeProviders. The error message does not appear as long as I exclude the reference to FSharp.Data.TypeProviders. The strangest thing is that I have not changed my references at all over the past week or so, yet the error message only appeared yesterday.
So, my questions are:
Is Profile 47 really required? If so, how may I acquire it?
Even if I do acquire Profile 47, wouldn't I still experience trouble building my project, since my other references do not target Profile 47?
Are there any approaches that I may not have considered?
After tearing my hair out trying a variety of suggested solutions I found online, I discovered that the only way to resolve the issue was to change my Target Framework from 4.5 to 4.0, re-install all of my references to ensure compatibility with .NET 4.0, and then re-build my solution. Using .NET 4.0 means that I am no longer able to use Microsoft.Experimental.Collections, since it is compatible only with .NET 4.5. This means that I will have to re-write all of my code that makes use of Microsoft.Experimental.Collections, but I consider that a lesser evil than not being able to build my project at all.
So, to answer my own question mentioned in the title, Profile 47 is not required to use FSharp.Data.TypeProviders. :-)
EDIT:
I have found another solution to my problem. I created a new project (again), migrated all my source files over, re-installed all the DLLs I need, and this time I have no trouble whatsoever building my project. I think Carsten (who wrote the first comment to this answer) is correct in saying that the versions became messed up in the original solution files.

How to get the precise NetBSD version used in the book Code Reading

I recently read the book "Code Reading: The Open Source Perspective" (the wiki link is here).
I noticed that it uses a NetBSD snapshot,
netbsd "netbsd export-19980407"
as the source code it digs into.
My question is: how can I get that precise version of NetBSD?
That way I can relate the book to the exact code and get more hands-on experience while reading it.
Since this is an "export", it probably means it was a CVS export of the main NetBSD tree.
See: http://netbsd.org/releases/formal.html
The date reads as 1998 April 07 (that is how I interpret the date of the export), so that would be a CVS version from between NetBSD 1.3.1 and NetBSD 1.3.2.
Make of that what you will, since I am not a NetBSD expert (but I have installed and used it a dozen times already).
FWIW, the newest NetBSD version (as of this writing) is 6.1 - yes, 1998 was 15 years ago, and I was much younger then...

What version numbering scheme to use?

I'm looking for a version numbering scheme that expresses the extent of change, especially compatibility.
Apache APR, for example, uses the well-known version numbering scheme
<major>.<minor>.<patch>
example: 4.5.11
Maven suggests a similar but more detailed schema:
<major>.<minor>.<patch>-<qualifier>-<build number>
example: 4.5.11-RC1-3732
Where is the Maven versioning scheme defined? Are there conventions for the qualifier and build number? It is probably a bad idea to use Maven but not follow the Maven version scheme...
What other version numbering schemes do you know? What scheme would you prefer and why?
I would recommend the Semantic Versioning standard, which the Maven versioning system also appears to follow. Please check out:
http://semver.org/
In short, it is <major>.<minor>.<patch><anything_else>, and you can add additional rules to the anything-else part as you see fit, e.g. -<qualifier>-<build_number>.
Here is the current Maven version comparison algorithm, and a discussion of it. As long as versions only grow, and all fields except the build number are updated manually, you're good. Qualifiers work like this: if one is a prefix of the other, longer is older. Otherwise they are compared alphabetically. Use them for pre-releases.
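This is not the real Maven ComparableVersion code, just a small sketch of the qualifier rule described above (a prefix relationship means the longer qualifier is older, otherwise plain alphabetical order decides), to make the rule concrete:

    // Sketch of the qualifier comparison rule described above (not Maven's actual code).
    // Returns <0 if a is the older qualifier, 0 if equal, >0 if a is the newer one.
    #include <string>

    int compareQualifiers(const std::string &a, const std::string &b)
    {
        if (a == b) return 0;
        // If one qualifier is a prefix of the other, the longer one is older,
        // e.g. "RC" (as in 1.0-RC) is newer than "RC-SNAPSHOT" (as in 1.0-RC-SNAPSHOT).
        if (b.compare(0, a.size(), a) == 0) return 1;   // a is a prefix of b, so b is older
        if (a.compare(0, b.size(), b) == 0) return -1;  // b is a prefix of a, so a is older
        // Otherwise compare alphabetically.
        return a < b ? -1 : 1;
    }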
Seconding the use of semantic versioning for expressing compatibility; major is for non-backwards compatible changes, minor for backward-compatible features, patch for backward-compatible bugfixes. Document it so your library users can express dependencies on your library correctly. Your snapshots are automated and don't have to increment these, except the first snapshot after a release because of the way prefixes are compared.
Purely for completeness, I will mention the old Apple standard for version numbers. It looks like <major version>.<minor version>.<bug version><stage><non-release revision>. Stage is a code drawn from the set d (development), a (alpha), b (beta), or fc (final customer ship - more or less the same as release candidate, I think).
The stage and non-release revision are only used for versions short of proper releases.
So, the first version of something might be 1.0.0. You might release a bugfix as 1.0.1, a new version (with more features) as 1.1, and a rewrite or major upgrade as 2.0. If you then wanted to work towards 2.0.1, you might start with 2.0.1d1, 2.0.1d2, on to 2.0.1d153 or whatever it took you, then send 2.0.1a1 to QA, and after they approved 2.0.1a37, send 2.0.1b1 to some willing punters; then, after 2.0.1b9 survived a week in the field, burn 2.0.1fc1 and start getting signoffs. When 2.0.1fc17 got enough signoffs, it would become 2.0.1, and there would be much rejoicing.
This format was standardised enough that there was a packed binary format for it, and helper routines in the libraries for doing comparisons.
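For the curious, the packed binary format mentioned above was the Mac toolbox NumVersion record; the sketch below is from memory of those old headers, so treat the exact field layout and stage constants as assumptions.

    // Sketch of the old Mac OS packed version record (NumVersion), reconstructed from memory.
    #include <cstdint>

    struct NumVersion {
        std::uint8_t majorRev;        // major revision, binary-coded decimal
        std::uint8_t minorAndBugRev;  // high nibble = minor revision, low nibble = bug revision (BCD)
        std::uint8_t stage;           // development stage, one of the constants below
        std::uint8_t nonRelRev;       // non-release revision (the number after d/a/b/fc)
    };

    enum {                            // stage constants, as I remember them
        developStage = 0x20,          // 'd'
        alphaStage   = 0x40,          // 'a'
        betaStage    = 0x60,          // 'b'
        finalStage   = 0x80           // released version (no stage letter shown)
    };

    // Example: 2.0.1b9 would be packed as { 0x02, 0x01, betaStage, 9 }.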
After reading a lot of articles/Q&As/FAQs/books, I have come to think that [MAJOR].[MINOR].[REV] is the most useful versioning scheme for describing compatibility between project versions (a versioning scheme for developers, not for marketing).
MAJOR changes are backward-incompatible and require changing the project name, paths to files, GUIDs, etc.
MINOR changes are backward-compatible. They mark the introduction of new features.
REV is for security/bug fixes. Backward- and forward-compatible.
This versioning scheme is inspired by libtool versioning semantics and by articles such as:
http://www106.pair.com/rhp/parallel.html
NOTE: I also recommend providing build/date/customer/quality as additional info (build number, build date, customer name, release quality):
Hello app v2.6.34 for National bank, 2011-05-03, beta, build 23545
But this info is not versioning info!
Note that a version number scheme (like x.y.0 vs. x.y) can be constrained by external factors.
Consider this announcement for Git 1.9 (January 2014):
A release candidate Git v1.9-rc2 is now available for testing at the usual places.
I've heard rumours that various third-party tools do not like the two-digit version numbers (e.g. "Git 2.0") and started barfing left and right when the users install v1.9-rc1.
While it is tempting to laugh at them for their sloppy assumption, I am also practical and do not mind calling the upcoming release v1.9.0 to help them.
If we go that route (and I am inclined to go that route at this moment), the versioning scheme will be:
The next release candidate will be v1.9.0-rc3, not v1.9-rc3;
The first maintenance release for v1.9.0 will be v1.9.1 (and Nth one be v1.9.N); and
The feature release after v1.9.0 will be either v1.10.0 or v2.0.0, depending on how big the feature jump we are looking at.

Tips on Using Bison --graph=[file] on Linux

Recently (about a month ago) I was trying to introduce new constructs to my company's in-house extension language, and struggling with a couple of reduce-reduce errors. While I eventually solved this problem, digging into the y.output file was no picnic.
As an experiment, I tried using Bison's --graph=<file> option to output a DOT file (note that our standard build uses Byacc, not Bison). As I'm on a 'turnkey' Linux box, I didn't have a Graphviz installation and could not easily install from RPMs (working on Red Hat Enterprise Linux 4). Instead, I built it from source.
As an initial experiment, I tried to run dotty with PostScript output. Now, our internal language is your average home-grown, Turing-complete, dynamically typed scripting language, but I was unprepared for what followed. The dotty run took over four hours (on a 2GHz dual-core AMD64 box)! And when it was done, the rendered graph was not what I would call readable.
So, quite simply, I'm looking for advice. Is there a set of switches that would improve the outcome over the 'default' approach I took? I'm looking for experience in:
optimizing 'render' time
improving readability of the graph
possible advice on better graphical viewers
I imagine you've already seen this link, but just for completeness, there is a list of viewers etc. at: http://graphviz.org/resources/ or see https://web.archive.org/web/20131005020548/http://graphviz.org/Resources.php for an archived copy.
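For what it's worth, rendering straight to SVG (rather than going through dotty and PostScript) tends to be both faster and much easier to navigate in a browser. Below is a rough sketch using the Graphviz library API (libgvc) from C++; it assumes a reasonably recent Graphviz build that installed the library headers and the libgvc pkg-config file, so adjust to your setup.

    // Sketch: load the DOT file produced by bison --graph and render it to SVG.
    // Assumed build line: g++ render.cpp $(pkg-config --cflags --libs libgvc)
    #include <cstdio>
    #include <graphviz/gvc.h>

    int main(int argc, char **argv)
    {
        const char *in  = (argc > 1) ? argv[1] : "parser.dot";
        const char *out = (argc > 2) ? argv[2] : "parser.svg";

        GVC_t *gvc = gvContext();
        FILE  *fp  = std::fopen(in, "r");
        if (!fp) return 1;

        Agraph_t *g = agread(fp, nullptr);      // parse the DOT file
        std::fclose(fp);
        if (!g) return 1;

        gvLayout(gvc, g, "dot");                // layout; try "sfdp" if "dot" is too slow on huge graphs
        gvRenderFilename(gvc, g, "svg", out);   // SVG output stays searchable and zoomable
        gvFreeLayout(gvc, g);
        agclose(g);
        gvFreeContext(gvc);
        return 0;
    }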

Where can I find TTY and curses documentation for Unix?

I'm working on automation tools for an ERP program running on SCO Unix.
See my questions on Expect:
(Tcl/Expect) clear screen after exit
Expect - get variable from screen region based on row and column
Where can I find (either locally or on the web) resources for understanding which control characters are used in a session and, more specifically, for determining a field's location on the screen during an interaction with the ERP program?
The specific control characters for a given terminal type are stored in the terminfo database. curses reads the value of $TERM when initializing and uses it to find and extract the relevant sequences for the various terminal operations.
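To make that concrete, here is a small sketch that pulls a couple of sequences out of terminfo using the low-level terminfo calls (setupterm, tigetstr, tparm); it assumes an ncurses development package and linking with -lncurses (or -ltinfo on some systems). The "cup" capability is the cursor-addressing sequence, which is most likely what the ERP program emits when it positions a field at a given row and column.

    // Sketch: dump a few escape sequences from the terminfo entry for the current $TERM.
    // Assumes ncurses headers; link with -lncurses (or -ltinfo, depending on the system).
    #include <cstdio>
    #include <unistd.h>
    #include <curses.h>
    #include <term.h>

    static void show(const char *label, const char *seq)
    {
        std::printf("%s:", label);
        if (!seq || seq == (const char *)-1) { std::printf(" (not defined)\n"); return; }
        for (const unsigned char *p = (const unsigned char *)seq; *p; ++p)
            std::printf(" %02x", *p);            // raw bytes of the escape sequence
        std::printf("\n");
    }

    int main()
    {
        int err = 0;
        if (setupterm(nullptr, STDOUT_FILENO, &err) != 0)   // load the terminfo entry for $TERM
            return 1;

        char cap_clear[] = "clear";              // clear-screen capability
        char cap_cup[]   = "cup";                // cursor-addressing capability

        show("clear screen (clear)", tigetstr(cap_clear));

        char *cup = tigetstr(cap_cup);
        show("cursor address (cup)", cup);

        // tparm() expands a parameterised capability, e.g. "move to row 10, column 20"
        // (terminfo rows and columns are 0-based).
        if (cup && cup != (char *)-1)
            show("cup(10,20)", tparm(cup, 10, 20));
        return 0;
    }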
I'm not really clear what you are asking, but one source of documentation on curses is the GNU implementation at http://www.gnu.org/software/ncurses. As far as 'control characters' go, well that depends on what terminal you use - yours probably understands ANSI codes - see http://en.wikipedia.org/wiki/ANSI_escape_code.
I just found out that the X/Open Group released a new version of their standard in November 2009 (previous version was released in 1996), and it is available free on the web from their bookstore as Technical Standard - X/Open Curses, Issue 7. You have to register, but access is free (and registration does not lead to an inundation of email, etc).
The previous version, Issue 4, Version 2 (from July 1996), is no longer available from X/Open. Given the newness of Issue 7, the new features are unlikely to be widely implemented yet, but look for changes in the next few years.