How to add extra info to BenchmarkDotNet summary? - hostname

How can I add extra information to extend the summary produced by BenchmarkDotNet?
For example:
the current host name, or
the current (Git) branch name
I would like to achieve something similar to this example:
Host MachineName: <Environment.MachineName>
Branch: <Git-Branch-Name>
BenchmarkDotNet=v0.13.1, OS=Windows 10...
Intel Core i7...
[Host] : .NET Framework 4.8 (4.8.4300.0), X64 RyuJIT
Dry : .NET Framework 4.8 (4.8.4300.0), X64 RyuJIT
Job=Dry IterationCount=1 LaunchCount=1
RunStrategy=ColdStart UnrollFactor=1 WarmupCount=1
| Method | Mean | Error |
|----------------- |----------- |------ |
| Foo | 1,940.3 ms | NA |

Currently there is no way to extend the Summary with extra data. All you can do is implement a custom column and add it to the config: https://benchmarkdotnet.org/articles/configs/columns.html
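For example, a custom column can at least surface the machine name next to every result. Here is a minimal sketch; the IColumn members shown match recent 0.13.x releases (check your version), and the class name, column name, and MyBenchmarks type are made up for illustration. A Git-branch column could be written the same way, filling GetValue by shelling out to git rev-parse --abbrev-ref HEAD.

using System;
using BenchmarkDotNet.Columns;
using BenchmarkDotNet.Configs;
using BenchmarkDotNet.Reports;
using BenchmarkDotNet.Running;

// Hypothetical column that prints Environment.MachineName for every benchmark.
public class HostNameColumn : IColumn
{
    public string Id => nameof(HostNameColumn);
    public string ColumnName => "HostName";
    public bool AlwaysShow => true;
    public ColumnCategory Category => ColumnCategory.Custom;
    public int PriorityInCategory => 0;
    public bool IsNumeric => false;
    public UnitType UnitType => UnitType.Dimensionless;
    public string Legend => "Machine the benchmark ran on";

    // Same value for every row; a per-run column only needs the environment.
    public string GetValue(Summary summary, BenchmarkCase benchmarkCase)
        => Environment.MachineName;

    public string GetValue(Summary summary, BenchmarkCase benchmarkCase, SummaryStyle style)
        => GetValue(summary, benchmarkCase);

    public bool IsDefault(Summary summary, BenchmarkCase benchmarkCase) => false;
    public bool IsAvailable(Summary summary) => true;
}

Register it on a config before running:

var config = ManualConfig.Create(DefaultConfig.Instance)
    .AddColumn(new HostNameColumn());
BenchmarkRunner.Run<MyBenchmarks>(config);  // MyBenchmarks is your benchmark class

This adds a HostName column to the results table rather than a line in the header block, but it does get the information into the exported summary.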

Related

IntelliJ IDEA 2021.2 cannot render Markdown table?

I use IDEA 2021.2 and the corresponding Markdown plugin, but a simple table does not display in the preview:
Why? I found that somebody related the problem to JavaFX; however, I think that occurred in older IDEA versions, and I cannot find a JavaFX rendering option in my version.
How do I solve this?
You are using the wrong Markdown syntax.
The following code works fine (the header separator row needs a minimum of three - characters):
| Column 1 | Column 2 |
| :--- | :--- |
| AAA | BBB |
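For reference, the placement of the colons in that separator row controls column alignment (left, center, right):

| Left | Center | Right |
| :--- | :----: | ----: |
| AAA  |  BBB   |   CCC |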

Can GraphDB load 10 million statements with OWL reasoning?

I am struggling to load most of the Drug Ontology OWL files and most of the ChEBI OWL files into a GraphDB Free v8.3 repository with Optimized OWL Horst reasoning on.
Is this possible? Should I do something other than "be patient"?
Details:
I'm using the loadrdf offline bulk loader to populate an AWS r4.16xlarge instance with 488.0 GiB of RAM and 64 vCPUs.
Over the weekend, I played around with different pool buffer sizes and found that most of these files individually load fastest with a pool buffer of 2,000 or 20,000 statements instead of the suggested 200,000. I also added -Xmx470g to the loadrdf script. Most of the OWL files would load individually in less than one hour.
Around 10 pm EDT last night, I started loading all of the files listed below simultaneously. Now, 11 hours later, there are still millions of statements to go. The load rate is around 70 statements/second now. It appears that only 30% of my RAM is being used, but the CPU load is consistently around 60.
Are there websites that document other people doing something at this scale?
Should I be using a different reasoning configuration? I chose this configuration because it was the fastest-loading OWL configuration in my experiments over the weekend. I think I will need to look for relationships that go beyond rdfs:subClassOf.
Files I'm trying to load:
+-------------+------------+---------------------+
| bytes | statements | file |
+-------------+------------+---------------------+
| 471,265,716 | 4,268,532 | chebi.owl |
| 61,529 | 451 | chebi-disjoints.owl |
| 82,449 | 1,076 | chebi-proteins.owl |
| 10,237,338 | 135,369 | dron-chebi.owl |
| 2,374 | 16 | dron-full.owl |
| 170,896 | 2,257 | dron-hand.owl |
| 140,434,070 | 1,986,609 | dron-ingredient.owl |
| 2,391 | 16 | dron-lite.owl |
| 234,853,064 | 2,495,144 | dron-ndc.owl |
| 4,970 | 28 | dron-pro.owl |
| 37,198,480 | 301,031 | dron-rxnorm.owl |
| 137,507 | 1,228 | dron-upper.owl |
+-------------+------------+---------------------+
@MarkMiller, you can take a look at the Preload tool, which is part of the GraphDB 8.4.0 release. It's specially designed to handle large amounts of data at constant speed. Note that it works without inference, so you'll need to load your data and then change the ruleset and re-infer the statements.
http://graphdb.ontotext.com/documentation/free/loading-data-using-preload.html
Just typing out @Konstantin Petrov's correct suggestion with tidier formatting. All of these queries should be run in the repository of interest... at some point in working this out, I misled myself into thinking that I should be connected to the SYSTEM repo when running these queries.
All of these queries also require the following prefix definition:
prefix sys: <http://www.ontotext.com/owlim/system#>
This doesn't directly address the timing/performance of loading large datasets into an OWL reasoning repository, but it does show how to switch to a higher level of reasoning after loading lots of triples into a no-inference ("empty" ruleset) repository.
You could start by querying for the current reasoning level/ruleset, and then run this same SELECT statement after each INSERT:
SELECT ?state ?ruleset {
?state sys:listRulesets ?ruleset
}
Add a predefined ruleset
INSERT DATA {
_:b sys:addRuleset "rdfsplus-optimized"
}
Make the new ruleset the default
INSERT DATA {
_:b sys:defaultRuleset "rdfsplus-optimized"
}
Re-infer... could take a long time!
INSERT DATA {
[] <http://www.ontotext.com/owlim/system#reinfer> []
}

postgres 9.5 create function plpython3u resets connections to server

I have installed PostgreSQL 9.5 on Windows 10, x64.
I have created the extension plpython3u with Python 3.3.5 on the server's path, and it appeared to be created successfully:
SELECT * FROM pg_available_extensions
WHERE name like '%python%' order by name;
       name        | default_version | installed_version |                  comment
-------------------+-----------------+-------------------+-------------------------------------------
 hstore_plpython2u | 1.0             |                   | transform between hstore and plpython2u
 hstore_plpython3u | 1.0             |                   | transform between hstore and plpython3u
 hstore_plpythonu  | 1.0             |                   | transform between hstore and plpythonu
 ltree_plpython2u  | 1.0             |                   | transform between ltree and plpython2u
 ltree_plpython3u  | 1.0             |                   | transform between ltree and plpython3u
 ltree_plpythonu   | 1.0             |                   | transform between ltree and plpythonu
 plpython2u        | 1.0             |                   | PL/Python2U untrusted procedural language
 plpython3u        | 1.0             | 1.0               | PL/Python3U untrusted procedural language
 plpythonu         | 1.0             |                   | PL/PythonU untrusted procedural language
(9 rows)
However, when I attempt to create the following function (from the pg docs):
CREATE FUNCTION pymax (a integer, b integer)
RETURNS integer
AS $$
if a > b:
return a
return b
$$ LANGUAGE plpython3u;
the psql (or pgAdmin3) connection to the server is reset.
The Python 3.3 on the path is Anaconda's distribution and runs fine on its own. I couldn't find the required version of Python in the PostgreSQL docs, so I used Dependency Walker, as described in Postgres database crash when installing plpython, to find the Python DLL that the server's lib/plpython3.dll points to.
Can anyone help me with what I have missed?
Many thanks
Looking more carefully at the installation download, I read the readme.txt. It clearly lays out how to include the language packs, including plpython. No need to muck around with Dependency Walker or anything like that.
Following the clear and simple instructions in the readme.txt is all it took to get the plpython extension working fine. No excuse for not reading the readme. My bad.
I was not matching the required version of Python. The bottom line is that PostgreSQL does seem to be relatively sensitive to the particular distribution of Python, not just the version (I had matched the versions: Postgres's bundled Python distribution at 3.3.4 and Anaconda at 3.3.4).
Specifically, setting the server's path to use the python installed along with the server, C:\EnterpriseDB\LanguagePack\9.5\x64\Python-3.3 in my case, was all that it took to get it working correctly.
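As a quick sanity check once the path is fixed (using the pymax function from the question; the argument values are arbitrary):

-- The CREATE FUNCTION from the question now succeeds without resetting
-- the connection, and the function is callable:
SELECT pymax(3, 5);  -- returns 5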
Thanks go to Adrian Klaver on the pgsql-general mailing list for getting me sorted. This answer is just for future reference as I claim it is easy to miss the readme :-).

Get version of rich edit library

ALL,
Is it possible to get the version of the RichEdit control the program uses?
| Version | Class name | Library | Shipped with | New features |
|---------|---------------|--------------|----------------|--------------|
| 1.0 | "RICHEDIT" | Riched32.dll | Windows 95 | |
| 2.0 | "RichEdit20W" | Riched20.dll | Windows 98 | ITextDocument |
| 3.0 | "RichEdit20W" | Riched20.dll | Windows 2000 | ITextDocument2 |
| 3.1 | "RichEdit20W" | Riched20.dll | Server 2003 | |
| 4.1 | "RICHEDIT50" | Msftedit.dll | Windows XP SP1 | tomApplyTmp |
| 7.5 | "RICHEDIT50" | Msftedit.dll | Windows 8 | ITextDocument2 (new), ITextDocument2Old, Spell checking, Ink support, Office Math |
| 8.5 | "RICHEDIT50" | Msftedit.dll | Windows 10 | LocaleName, more image formats |
I know I can just keep a variable and set it according to whether the Msftedit.dll library is loaded or not. However, if I load RichEd20.dll, I can get either the RichEdit 2 or the RichEdit 3 implementation, and they are quite different: a lot of stuff was added in the latter.
If I load Msftedit.dll, there are 7.5 features that would not be available in earlier versions (e.g. automatic spell checking).
It's even possible for the same process to have all three DLLs loaded and to use all three versions of RichEdit at once:
"RICHEDIT" → 1.0
"RichEdit20W" → 2.0, 3.0
"RICHEDIT50" → 4.1, 7.5, 8.5
Given a RichEdit control (e.g. WinForms RichTextBox, WPF RichTextBox, WinRT RichEditBox, VCL TRichEdit) is there a way to determine the version of a RichEdit control?
Or maybe I can somehow differentiate them by Windows version where it is available?
If you are using C++, you may find the following snippet useful for reading out the class name:

// className receives e.g. "RICHEDIT50"; see the mapping table above.
TCHAR className[MAX_PATH];
GetClassName(GetRichEditCtrl().GetSafeHwnd(), className, _countof(className));

GetRichEditCtrl() is a function on another control (MFC's CRichEditView here); you may need to substitute whatever gives you an HWND to the control.
Another method is to use a tool like Spy++ to inspect the class name.
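Since several versions share a class name ("RichEdit20W" covers 2.0 and 3.0; "RICHEDIT50" covers 4.1, 7.5, and 8.5), the class name alone cannot fully answer the question. One option is to read the file version of the RichEdit DLL actually loaded in the process. A sketch using the Win32 version APIs, assuming Msftedit.dll is the module of interest (substitute Riched20.dll as needed):

#include <windows.h>
#include <vector>
#pragma comment(lib, "version.lib")

// Reads the file version of the RichEdit DLL loaded in this process.
// Returns false if the DLL is not loaded or has no version resource.
bool GetRichEditDllVersion(DWORD& major, DWORD& minor)
{
    HMODULE mod = GetModuleHandleW(L"Msftedit.dll");
    if (!mod) return false;

    wchar_t path[MAX_PATH];
    if (!GetModuleFileNameW(mod, path, MAX_PATH)) return false;

    DWORD handle = 0;
    DWORD size = GetFileVersionInfoSizeW(path, &handle);
    if (size == 0) return false;

    std::vector<BYTE> data(size);
    if (!GetFileVersionInfoW(path, handle, size, data.data())) return false;

    VS_FIXEDFILEINFO* info = nullptr;
    UINT len = 0;
    if (!VerQueryValueW(data.data(), L"\\", reinterpret_cast<void**>(&info), &len) || !info)
        return false;

    major = HIWORD(info->dwFileVersionMS);
    minor = LOWORD(info->dwFileVersionMS);
    return true;
}

Note that the DLL's file version tracks Windows builds rather than the marketing RichEdit version numbers in the table above, so treat the result as an indicator of which DLL generation you have rather than an exact mapping.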

MSBuild copy task for all solution outputs without project\bin\configuration

I have an MSBuild script that builds all solutions in my repository, but now I need a way to copy all of the output to a build directory. I was trying to use the Output parameter in the build task to know which files to copy, but RecursiveDir can't be used with that parameter: MSBuild RecursiveDir is empty (you can see my build script here too). Anyway, I have this folder structure:
Repository
|
+- Solution1
| |
| +- ProjectA
| | |
| | +- bin
| | |
| | +- Release
| |
| +- ProjectB
| |
| +- bin
| |
| +- Release
|
+- Solution2
| |
| +- ProjectA
| |
| +- bin
| |
| +- x86
| |
| +-Release
| |
| +- images
|
...etc
Basically, I just want to copy the contents of each Release folder, including subfolder structure and contents, into the following structure:
Build
|
+- Solution1
|
+- Solution2
| |
| +- images
|
...etc
I.e. I want to strip the Project\bin\platform\configuration part of the path. I don't want to have to manually include each project, because new ones pop up every so often and it would be nice not to have to update the build script every time. Seems simple enough but I can't figure it out...
I've seen MsBuild Copy output and remove part of path but I don't really understand it so I don't know how to apply it here.
Have you tried overriding the OutputPath property?
For example, if you call msbuild on each solution:
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\msbuild.exe $(SolutionName) /p:OutputPath="%CD%\Build"
This redirects the output of every project in the solution, with no per-project configuration to deal with.
Just look in \Build for your output (its contents depend on your project type).
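If you would rather keep this inside the build script than on the command line, the same idea can be expressed with the MSBuild task. A sketch, assuming hypothetical RepositoryRoot and BuildDir properties; batching on %(Solutions.Filename) routes each solution's output to its own folder, which is what strips the Project\bin\platform\configuration part of the path:

<!-- Build every solution under the repository, sending each one's output
     straight to $(BuildDir)\<SolutionName> instead of the per-project bin folders. -->
<ItemGroup>
  <Solutions Include="$(RepositoryRoot)\**\*.sln" />
</ItemGroup>

<Target Name="BuildAll">
  <MSBuild Projects="@(Solutions)"
           Properties="Configuration=Release;OutputPath=$(BuildDir)\%(Solutions.Filename)" />
</Target>

New solutions are picked up automatically by the wildcard, so the script does not need updating when projects are added.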