How to extract histograms from a ROOT file and draw them from a macro

When I extract a histogram from my ROOT file, I do it the following way:
root -l output_idAntiId_Mc16a.root
root [0]
Attaching file output_idAntiId_Mc16a.root as _file0…
(TFile *) 0x7f8b9cba9470
root [1] .ls
TFile** output_idAntiId_Mc16a.root
TFile* output_idAntiId_Mc16a.root
KEY: TDirectoryFile plotEvent;1 plotEvent
KEY: TDirectoryFile pass_wgantiidcr_all_e_Nominal;1 pass_wgantiidcr_all_e_Nominal
KEY: TDirectoryFile pass_wgantiidcr_all_u_Nominal;1 pass_wgantiidcr_all_u_Nominal
root [2] pass_wgantiidcr_all_e_Nominal->cd()
(bool) true
root [3] .ls
TDirectoryFile* pass_wgantiidcr_all_e_Nominal pass_wgantiidcr_all_e_Nominal
KEY: TDirectoryFile pass_wgantiidcr_all_e_Nominal;1 pass_wgantiidcr_all_e_Nominal
KEY: TDirectoryFile plotEvent_Higgs;1 plotEvent_Higgs
KEY: TDirectoryFile plotEvent_Hyyd1;1 plotEvent_Hyyd1
KEY: TDirectoryFile plotEvent_Wy;1 plotEvent_Wy
KEY: TDirectoryFile plotEvent_zgamewk;1 plotEvent_zgamewk
root [4] plotEvent_Wy->cd()
(bool) true
root [5] .ls
TDirectoryFile* plotEvent_Wy plotEvent_Wy
KEY: TH1D w;1 w
KEY: TH1D wElEta;1 wElEta
KEY: TH1D wElPhi;1 wElPhi
KEY: TH1D wElPt;1 wElPt
KEY: TH1D wMuEta;1 wMuEta
KEY: TH1D wMuPhi;1 wMuPhi
KEY: TH1D wMuPt;1 wMuPt
root [7] wElEta->Draw()
Info in TCanvas::MakeDefCanvas: created default TCanvas with name c1
This gives me the desired plot, but I am having trouble writing a macro that does all this so I do not have to keep typing it over and over again. I tried the following:
TFile *f = new TFile("output_IdId_Mc16a.root");
f->ls();
TH1F *h1 = (TH1F*)f->Get("/pass_wgantiidcr_all_e_Nominal/plotEvent_Wy/wElEta");
h1->Draw();
but it doesn't work. Any ideas?

Try the following; saved as a macro file (e.g. extract.C, a name chosen here for illustration) it can be run with root -l extract.C:
void extract()
{
  TFile* f = TFile::Open("output_IdId_Mc16a.root");
  if (f)
  {
    // the .ls output above shows the histograms are TH1D, so cast to that
    TH1D* h1 = static_cast<TH1D*>(f->Get("pass_wgantiidcr_all_e_Nominal/plotEvent_Wy/wElEta"));
    if (h1)
      h1->Draw();
    else
      printf("No such histogram found!\n");
  }
  else
    printf("No such file found!\n");
}

Related

Ramda - how to pass dynamic argument to function inside pipe

I am trying to add/use a variable inside the pipe to get the name of an object from a different object. Here is what I have so far:
I have an array of IDs, allOutgoingNodes, which I am using in the pipe.
I filter the results by the tableItemId property, add an additional property externalStartingPoint, and after that I would like to append the name of the tableItem from the tableItems object to content -> html using concat.
const startingPointId = 395;
const allNodes = {
  "818": {
    "id": "818",
    "content": {
      "html": "<p>1</p>"
    },
    "outgoingNodes": [
      "819"
    ],
    "tableItemId": 395
  },
  "821": {
    "id": "821",
    "content": {
      "html": "<p>4</p>"
    },
    "tableItemId": 396
  }
}
const tableItems = {
  "395": {
    "id": "395",
    "name": "SP1",
    "code": "SP1"
  },
  "396": {
    "id": "396",
    "name": "SP2",
    "code": "SP2"
  }
}
const allOutgoingNodes = R.pipe(
  R.values,
  R.pluck('outgoingNodes'),
  R.flatten
)(tableItemNodes);
const result = R.pipe(
  R.pick(allOutgoingNodes),
  R.reject(R.propEq('tableItemId', startingPointId)),
  R.map(
    R.compose(
      R.assoc('externalStartingPoint', true),
      SomeMagicFunction(node.tableItemId),
      R.over(
        R.lensPath(['content', 'html']),
        R.concat(R.__, '<!-- Table item name should display here -->')
      )
    )
  ),
)(allNodes);
Here is a complete working example: ramda editor
Any help and suggestions on how to improve this piece of code will be appreciated.
Thank you.
Update
In the comments, OriDrori noted a problem with my first version. I didn't really understand one of the requirements. This version tries to address that issue.
const {compose, chain, prop, values, lensPath,
       pipe, pick, reject, propEq, map, assoc, over} = R

const getOutgoing = compose (chain (prop('outgoingNodes')), values)

const htmlLens = lensPath (['content', 'html'])

const addName = (tableItems) => ({tableItemId}) => (html) =>
  html + ` <!-- ${tableItems [tableItemId] ?.name} -->`

const convert = (tableItemNodes, tableItems, startingPointId) => pipe (
  pick (getOutgoing (tableItemNodes)),
  reject (propEq ('tableItemId', startingPointId)),
  map (assoc ('externalStartingPoint', true)),
  map (chain (over (htmlLens), addName (tableItems)))
)
const startingPointId = 395;
const tableItemNodes = {818: {id: "818", content: {html: "<p>1</p>"}, outgoingNodes: ["819"], tableItemId: 395}, 819: {id: "819", content: {html: "<p>2</p>"}, outgoingNodes: ["820"], tableItemId: 395}};
const tableItems = {395: {id: "395", name: "SP1", code: "SP1"}, 396: {id: "396", name: "SP2", code: "SP2"}}
const allNodes = {818: {id: "818", content: {html: "<p>1</p>"}, outgoingNodes: ["819"], tableItemId: 395}, 819: {id: "819", content: {html: "<p>2</p>"}, outgoingNodes: ["820"], tableItemId: 395}, 820: {id: "820", content: {html: "<p>3</p>"}, outgoingNodes: ["821"], tableItemId: 396}, 821: {id: "821", content: {html: "<p>4</p>"}, tableItemId: 396}}
console .log (
convert (tableItemNodes, tableItems, startingPointId) (allNodes)
)
.as-console-wrapper {max-height: 100% !important; top: 0}
<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.27.1/ramda.min.js"></script>
Most of the comments on the version below still apply; we should also note that chain, when applied to functions, acts like this:
chain (f, g) (x) //~> f (g (x)) (x)
So chain (over (htmlLens), addName (tableItems))
ends up being something like
(node) => over (htmlLens) (addName (tableItems) (node)) (node)
which in Ramda is equivalent to
(node) => over (htmlLens, addName (tableItems) (node), node)
which we then map over the nodes coming to it. (You can also see this in the Ramda REPL.)
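To make that behavior concrete, here is a plain-JavaScript sketch (using a hand-rolled chainFn rather than Ramda's R.chain, so the snippet stands alone):

```javascript
// For functions, chain(f, g)(x) means f(g(x))(x): g extracts a value,
// and f receives both that value and the original input.
// Hand-rolled equivalent of Ramda's R.chain for the Function "monad".
const chainFn = (f, g) => (x) => f(g(x))(x);

// g: pull the name out of a node; f: use the name AND the whole node.
const getName = (node) => node.name;
const describe = (name) => (node) => `${name} has id ${node.id}`;

const description = chainFn(describe, getName)({ id: 7, name: "SP1" });
// → "SP1 has id 7"
```

This is exactly the shape of chain (over (htmlLens), addName (tableItems)) above: addName (tableItems) plays the role of g, and over (htmlLens) the role of f.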
Original Answer
It's not trivial to weave extra arguments through a pipeline, because pipelines are designed for the simple purpose of passing a single argument down the line, transforming it at every step. There are of course techniques we could figure out for that, but I would expect them not to be worth the effort, because the only thing they would gain us is the ability to write our code point-free. And point-free should not be a goal on its own. Use it when it makes your code simpler and more readable; skip it when it doesn't.
Instead, I would break this apart with some helper functions, and then write a main function that takes our arguments and passes them as necessary to the helpers inside our main pipeline. Expand this snippet to see one approach:
const {compose, chain, prop, values, lensPath, flip, concat,
       pipe, pick, reject, propEq, map, assoc, over} = R

const getOutgoing = compose (chain (prop ('outgoingNodes')), values)

const htmlLens = lensPath (['content', 'html'])

const addName = flip (concat) ('Table item name goes here')

const convert = (tableItemNodes, startingPointId) => pipe (
  pick (getOutgoing (tableItemNodes)),
  reject (propEq ('tableItemId', startingPointId)),
  map (assoc ('externalStartingPoint', true)),
  map (over (htmlLens, addName))
)
const startingPointId = 395;
const tableItemNodes = {818: {id: "818", content: {html: "<p>1</p>"}, outgoingNodes: ["819"], tableItemId: 395}, 819: {id: "819", content: {html: "<p>2</p>"}, outgoingNodes: ["820"], tableItemId: 395}};
const allNodes = {818: {id: "818", content: {html: "<p>1</p>"}, outgoingNodes: ["819"], tableItemId: 395}, 819: {id: "819", content: {html: "<p>2</p>"}, outgoingNodes: ["820"], tableItemId: 395}, 820: {id: "820", content: {html: "<p>3</p>"}, outgoingNodes: ["821"], tableItemId: 396}, 821: {id: "821", content: {html: "<p>4</p>"}, tableItemId: 396}}
console .log (
convert (tableItemNodes, startingPointId) (allNodes)
)
.as-console-wrapper {max-height: 100% !important; top: 0}
<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.27.1/ramda.min.js"></script>
(You can also see this on the Ramda REPL.)
Things to note
I find compose (chain (prop ('outgoingNodes')), values) to be slightly simpler than pipe (values, pluck('outgoingNodes'), flatten), but they work similarly.
I often separate out the lens definitions even if I'm only going to use them once to make the call site cleaner.
There is probably no good reason to use Ramda in addName. This would work just as well: const addName = (s) => s + 'Table item name goes here' and is cleaner. I just wanted to show flip as an alternative to using the placeholder.
There is an argument to be made for replacing
map (assoc ('externalStartingPoint', true)),
map (over (htmlLens, addName))
with
map (pipe (
  assoc ('externalStartingPoint', true),
  over (htmlLens, addName)
))
as was done in the original. The Functor composition law states that they have the same result, and the combined version requires one fewer iteration through the data. But it adds some complexity to the code that I wouldn't bother with unless a performance test pointed to this as a problem.
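The composition law mentioned above can be checked in plain JavaScript with Array.prototype.map (no Ramda needed):

```javascript
// Functor composition law: map(f) after map(g) equals map(f ∘ g).
const inc = (x) => x + 1;
const double = (x) => x * 2;

const twoPasses = [1, 2, 3].map(double).map(inc);     // two iterations
const onePass = [1, 2, 3].map((x) => inc(double(x))); // one iteration

// Both produce [3, 5, 7]; the fused version walks the array only once.
```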
Before I saw your answer, I managed to do something like the example below:
return R.pipe(
  R.pick(allOutgoingNodes),
  R.reject(R.propEq('tableItemId', startingPointId)),
  R.map((node: Node) => {
    const startingPointName = allTableItems[node.tableItemId].name;
    return R.compose(
      R.assoc('externalStartingPoint', true),
      R.over(
        R.lensPath(['content', 'html']),
        R.concat(
          R.__,
          `<p class='test'>See node in ${startingPointName}</p>`
        )
      )
    )(node);
  }),
  R.merge(newNodesObject)
)(allNodes);
What do you think?

Remove subelement from a yaml array in bash/awk when there's no order?

I'm trying to find a better way of removing values from a yaml. For example, this is my yaml:
apiVersion: v1
data:
  mapRoles: |-
    - username: user1
      rolearn: arn
      groups:
        - grp
        - grp2
    - groups:
        - grp
      rolearn: arn
      username: user2
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"..."}
  uid: 93ad6dc1-2a1f-11ea-b5da-0ec0e91c7076
My input is a list of user names, which I can test with a regex OR. As I cannot install any dependencies, I have to use a tool that is installed on every system, which is why I chose awk.
For each part I have to check whether the username matches any value in the list; if it does, remove a specific group from the "groups:" list.
What I was thinking is to identify each start of a yaml key that represents a user, then add everything to an array while checking whether the username is exactly what we expect. If it is, print the array without the relevant group; otherwise print the entire array.
I've started writing it and it seems complex. Is there a better way?
--- examples ---
If I'm specifying "user1" and "grp" as the params, the yaml should look like:
apiVersion: v1
data:
  mapRoles: |-
    - username: user1
      rolearn: arn
      groups:
        - grp2
    - groups:
        - grp
      rolearn: arn
      username: user2
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"..."}
  uid: 93ad6dc1-2a1f-11ea-b5da-0ec0e91c7076
If I'm specifying user2 and "grp", it should look like:
apiVersion: v1
data:
  mapRoles: |-
    - username: user1
      rolearn: arn
      groups:
        - grp
        - grp2
    - groups:
      rolearn: arn
      username: user2
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"..."}
  uid: 93ad6dc1-2a1f-11ea-b5da-0ec0e91c7076
That's my issue: user2 is specified AFTER the groups section, so I'm not sure about the correct way to remove it.
This might be what you're trying to do, but it's not clear from your question:
$ cat tst.awk
BEGIN {
    split(users,tmp)
    for (i in tmp) {
        tgtUsers[tmp[i]]
    }
    split(groups,tmp)
    for (i in tmp) {
        tgtGroups[tmp[i]]
    }
}
match($0,/^[[:space:]]*(-[[:space:]]*)?[^[:space:]]+[[:space:]]*:/) {
    sect = $0
    sub(/^[[:space:]]*(-[[:space:]]*)?/,"",sect)
    sub(/[[:space:]]*:.*/,"",sect)
}
sect == "username" {
    inTgtUsers = ($NF in tgtUsers)
    inGroups = 0
}
sect == "groups" {
    inGroups = 1
}
!(inGroups && inTgtUsers && ($NF in tgtGroups))
$ awk -v users='user1' -v groups='grp' -f tst.awk file
apiVersion: v1
data:
  mapRoles: |-
    - username: user1
      rolearn: arn
      groups:
        - grp2
    - groups:
      rolearn: arn
      username: user2
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"..."}
  uid: 93ad6dc1-2a1f-11ea-b5da-0ec0e91c7076
Following Ed Morton's note, I've taken his script, but had to change the regex, as it didn't catch groups with colons in the name (such as - my:group):
BEGIN {
    inGroups = 0
    split(users,tmp)
    for (i in tmp) {
        tgtUsers[tmp[i]]
    }
    split(groups,tmp)
    for (i in tmp) {
        tgtGroups[tmp[i]]
    }
}
match($0,/^ +(-? (username|rolearn|userarn): [^ ]+|-? groups: *$)/) {
    sect = $0
    sub(/^[[:space:]]*(-[[:space:]]*)?/,"",sect)
    sub(/[[:space:]]*:.*/,"",sect)
    if ($1 == "-") {
        startIndexArr[NR]
        start_index = NR
    }
    inGroups = 0
}
sect == "username" {
    if ($NF in tgtUsers)
        relevantUsersIndexArr[start_index]
}
sect == "groups" {
    inGroups = 1
}
(inGroups == 1 && ($NF in tgtGroups)) {
    foundGroupsArr[NR]
}
{
    yamlArr[NR] = $0
}
END {
    row_num = 1
    for (i in yamlArr) {
        if (row_num in startIndexArr)
            start_entity_index = row_num
        print_row = 1
        if (row_num in foundGroupsArr && start_entity_index in relevantUsersIndexArr)
            print_row = 0
        if (print_row == 1)
            print yamlArr[row_num]
        row_num++
    }
}

How to compare 2 JSON objects containing array using Karate tool [duplicate]

This question already has an answer here:
Is there a simple match for objects containing array where the array content order doesn't matter?
(1 answer)
Closed 1 year ago.
I am testing an API using intuit/karate.
The expected JSON is:
{name: hello, config: [{username: abc, password: xyz}, {username: qwe, password: tyu}]}
There are two possible API responses.
First possible actual JSON:
{name: hello, config: [{username: qwe, password: tyu}, {username: abc, password: xyz}]}
Second possible actual JSON:
{name: hello, config: [{username: abc, password: xyz}, {username: qwe, password: tyu}]}
The order of the array elements differs between actual responses, so the following validation approaches fail randomly:
And response == < ExpectedResponse >
And response contains < ExpectedResponse >
Sometimes the error is thrown as:
Error : { Actual: response.config[0].abc, Expected: response.config[0].qwe }
and sometimes as:
Error : { Actual: response.config[0].qwe, Expected: response.config[0].abc }
Could you please suggest a Karate approach that validates the entire JSON while ignoring the order of elements in the nested array?
Here is the solution; the ^^ marker means "contains only", i.e. the array must contain exactly these elements, in any order:
* def response1 = {name: 'hello', config:[{username: 'qwe', password: 'tyu'},{username: 'abc', password: 'xyz'}]}
* def response2 = {name: 'hello', config:[{username: 'abc', password: 'xyz'},{username: 'qwe', password: 'tyu'}]}
* def config = [{username: 'qwe', password: 'tyu'},{username: 'abc', password: 'xyz'}]
* match response1 == { name: 'hello', config: '#(config)' }
* match response2 == { name: 'hello', config: '#(^^config)' }
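The underlying idea of ^^ (order-insensitive comparison) can be sketched in plain JavaScript; sameItemsAnyOrder is a hypothetical helper for illustration, not part of Karate's API:

```javascript
// Order-insensitive comparison of arrays of flat objects: serialize each
// object with sorted keys, then compare the sorted lists of serializations.
const sameItemsAnyOrder = (a, b) => {
  const key = (o) => JSON.stringify(Object.entries(o).sort());
  const normalize = (arr) => arr.map(key).sort().join("|");
  return a.length === b.length && normalize(a) === normalize(b);
};

const expected = [{ username: "abc", password: "xyz" }, { username: "qwe", password: "tyu" }];
const shuffled = [{ username: "qwe", password: "tyu" }, { username: "abc", password: "xyz" }];

// sameItemsAnyOrder(expected, shuffled) → true (same elements, different order)
// sameItemsAnyOrder(expected, [expected[0]]) → false (different length)
```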

Arangodb dynamic index on object keys

ArangoDB 2.8b3
I have documents with a property "specification" that can have 1-100 keys inside, like:
document {
  ...
  specification: {
    key1: "value",
    ...
    key10: "value"
  }
}
The task is fast querying by specification key:
FOR Doc IN MyCollection FILTER Doc.specification['key1'] == "value" RETURN Doc
I tried creating hash indexes with fields "specification", "specification.*", "specification[*]", and "specification[*].*".
The index is never used. Is there any solution without reorganizing the structure, or are there plans to support this in the future?
No, we currently don't have any smart idea how to handle indexes for structures like that. The memory usage would also increase, since the attribute names would have to be present in the index for each indexed value.
What we will release with 2.8 is the ability to use indices on array structures:
db.posts.ensureIndex({ type: "hash", fields: [ "tags[*]" ] });
with documents like:
{ tags: [ "foobar", "bar", "anotherTag" ] }
Using AQL queries like this:
FOR doc IN posts
  FILTER 'foobar' IN doc.tags[*]
  RETURN doc
You could also index documents under arrays:
db.posts.ensureIndex({ type: "hash", fields: [ "tags[*].value" ] });
db.posts.insert({
  tags: [ { key: "key1", value: "foobar" },
          { key: "key2", value: "baz" },
          { key: "key3", value: "quux" } ]
});
The following query will then use the array index:
FOR doc IN posts
  FILTER 'foobar' IN doc.tags[*].value
  RETURN doc
However, the asterisk can only be used for array accesses - it can't substitute key matches in objects.
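Given that limitation, the usual workaround is the reorganization the question hoped to avoid: turning the object into a key/value array that a tags[*]-style index can cover. A plain-JavaScript sketch of the transformation (toIndexable is a hypothetical helper name):

```javascript
// Convert an object-valued "specification" into the array-of-objects
// shape that an ArangoDB array index (as shown above) can serve.
const doc = { specification: { key1: "value", key10: "other" } };

const toIndexable = (spec) =>
  Object.entries(spec).map(([key, value]) => ({ key, value }));

const indexable = toIndexable(doc.specification);
// → [{ key: "key1", value: "value" }, { key: "key10", value: "other" }]
```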

Problems converting a INI based configuration into YAML

Greetings to all.
I am working on a project that uses Zend_Config to create forms. I am working on broadening my knowledge base and have hit a snag.
I have a form config file in INI format that works fine. I would like to convert that form configuration into a YAML based file. I attempted to write the conversion myself and thought I accounted for everything. As this is my first journey into YAML, I need help to see what is wrong.
The ini file that works is here:
[production]
;General Form Meta Data
logon.form.action = "/customers/plogin"
logon.form.method="post"
logon.form.id="loginform"
;Form Element Prefix Data
logon.form.elementPrefixPath.decorator.prefix = "Elite_Decorator_"
logon.form.elementPrefixPath.decorator.path = "Elite/Decorator/"
logon.form.elementPrefixPath.decorator.type = "decorator"
logon.form.elementPrefixPath.validate.prefix = "Elite_Validate_"
logon.form.elementPrefixPath.validate.path = "Elite/Validate/"
logon.form.elementPrefixPath.validate.type = "validate"
;Form Element - email
logon.form.elements.email.type = "text"
logon.form.elements.email.options.required = "true"
logon.form.elements.email.options.label = "Email"
logon.form.elements.email.options.decorators.composite.decorator = "Composite"
logon.form.elements.email.options.validators.strlen.validator = "StringLength"
logon.form.elements.email.options.validators.strlen.options.min="2"
logon.form.elements.email.options.validators.strlen.options.max="50"
;Form Element - Password
logon.form.elements.password.type = "password"
logon.form.elements.password.options.required = "true"
logon.form.elements.password.options.label = "Password"
logon.form.elements.password.options.decorators.composite.decorator = "Composite"
logon.form.elements.password.options.validators.strlen.validator = "StringLength"
logon.form.elements.password.options.validators.strlen.options.min="2"
logon.form.elements.password.options.validators.strlen.options.max="20"
;Form Element - Submit
logon.form.elements.submit.type = "submit"
logon.form.elements.submit.options.label = "Logon"
;Form Display Group 1
logon.form.displaygroups.group1.name = "logon"
logon.form.displaygroups.group1.options.legend = "Please Login to your Account"
logon.form.displaygroups.group1.options.decorators.formelements.decorator = "FormElements"
logon.form.displaygroups.group1.options.decorators.fieldset.decorator = "Fieldset"
logon.form.displaygroups.group1.options.decorators.fieldset.options.style = "width:375px;"
logon.form.displaygroups.group1.elements.email = "email"
logon.form.displaygroups.group1.elements.password = "password"
logon.form.displaygroups.group1.elements.submit = "submit"
And my YAML translation:
production:
  logon:
    form:
      action: /customers/plogin
      method: post
      id: loginform
      elementPrefixPath:
        decorator:
          prefix: Elite_Decorator_
          path: Elite/Decorator/
          type: decorator
        validate:
          prefix: Elite_Validate_
          path: Elite/Validate/
          type: validate
      elements:
        email:
          type: text
          options:
            required: true
            label: Email
            decorators:
              composite:
                decorator: Composite
            validators:
              strlen:
                validator: StringLength
                options:
                  min: 2
                  max: 50
        password:
          type: text
          options:
            required: true
            label: Password
            decorators:
              composite:
                decorator: Composite
            validators:
              strlen:
                validator: StringLength
                options:
                  min: 2
                  max: 20
        submit:
          type: submit
          options:
            label: Logon
      displaygroups:
        group1:
          name: logon
          options:
            legend: Please login to your account
            decorators:
              formelements:
                decorator: FormElements
              fieldset:
                decorator: Fieldset
                options:
                  style: width:375px;
          elements:
            email: email
            password: password
            submit: submit
The YAML based form only gives me a blank page. Upon investigation, none of the form markup is included in the page that is output. Any help would be greatly appreciated.
Regards,
Troy
I think you should have better indentation in your code:
production:
  logon:
    form:
      action: /customers/plogin
      method: post
      ....