Error reading histogram from ROOT file

I am reading a ROOT file whose structure looks like this:
$ root -l Residual_Position_iter_99_R1869.root
root [0]
Attaching file Residual_Position_iter_99_R1869.root as _file0...
Warning in <TClass::TClass>: no dictionary for class TF1Parameters is available
root [1] .ls
TFile** Residual_Position_iter_99_R1869.root
TFile* Residual_Position_iter_99_R1869.root
KEY: TH1F Pos_g3xcl_100;1
KEY: TH1F Pos_g3ycl_100;1
KEY: TH1F Pos_g2xcl_100;1
KEY: TH1F Pos_g2ycl_100;1
KEY: TH1F Pos_g1xcl_100;1
KEY: TH1F residual_g1xcl_100;1
KEY: TH1F residual_g1ycl_100;1
KEY: TH1F residual_g2xcl_100;1
KEY: TH1F residual_g2ycl_100;1
KEY: TH1F residual_g3xcl_100;1
KEY: TH1F residual_g3ycl_100;1
To read it I wrote this macro:
import ROOT
from ROOT import TFile, TH1F, TObject
ROOT.gROOT.SetBatch(True) # batch mode: prevent histograms from being displayed
c=ROOT.TCanvas("c","c",800,600)
f1=ROOT.TFile("Residual_Position_iter_99_R1869.root","READ")
h1x=f1.Get("Pos_g1xcl_100"); c.cd(); h1x.Draw()
c.Print("plots/residual.pdf")
This code works fine, but only for the histograms whose names start with Pos. If I replace
h1x=f1.Get("Pos_g1xcl_100");
with
h1x=f1.Get("residual_g1xcl_100");
then I get a segmentation fault [1]. The difference between Pos_g1xcl_100 and residual_g1xcl_100 is that the former is a plain histogram while the latter is a histogram with its fit attached.
[1]
TClass::TClass:0: RuntimeWarning: no dictionary for class TF1Parameters is available
TStreamerInfo::BuildOld:0: RuntimeWarning: Cannot convert TF1::fParErrors from type:vector<double> to type:Double_t*, skip element
TStreamerInfo::BuildOld:0: RuntimeWarning: Cannot convert TF1::fParMin from type:vector<double> to type:Double_t*, skip element
TStreamerInfo::BuildOld:0: RuntimeWarning: Cannot convert TF1::fParMax from type:vector<double> to type:Double_t*, skip element
TStreamerInfo::BuildOld:0: RuntimeWarning: Cannot convert TF1::fSave from type:vector<double> to type:Double_t*, skip element
TStreamerInfo::BuildOld:0: RuntimeWarning: Cannot convert TF1::fParams from type:TF1Parameters* to type:Double_t*, skip element
*** Break *** segmentation violation
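The warnings above point to a streamer mismatch: the file appears to have been written with a ROOT release whose TF1 layout differs from the one used for reading. As a hedged diagnostic (not a fix), the two versions can be compared like this:
import ROOT
f1 = ROOT.TFile.Open("Residual_Position_iter_99_R1869.root", "READ")
# The file records the ROOT version that wrote it as an integer code, e.g. 62506 for 6.25/06
print("ROOT version code stored in the file:", f1.GetVersion())
# Version of the ROOT build doing the reading
print("ROOT version used for reading:", ROOT.gROOT.GetVersion())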

Is there any way to find a string from a table?

I don't know if there is a way to find a string in a Lua table that matches a given argument, for example:
function Library.Instance(object, name)
    local Explorer = {}
    local Storage = {
        "fart",
    }
    if Storage == ((theobjectname)) then
        print(object)
    end
end
You have to implement it yourself.
Here is my version of a tsearch()...
-- tsearch.lua
local magictab = {}
function magictab.tsearch(self, searchfor, replace)
    local searchfor, replace, self = searchfor or 'Lua', replace or '<%1>', self or _G
    for key, value in pairs(self) do
        if type(value) == 'string' then
            local found, count = value:gsub(searchfor, replace)
            if count > 0 then
                print('Key:', key, 'Found:', found)
            end
        end
    end
end
-- Creating Key/Value Pairs
magictab['A Key With Spaces'] = 'Ghostbuster Goes Ghost Busting'
magictab[#magictab + 1] = 'Another Day - Another Table'
magictab[0] = 'Here I am - The Ghost Within Machine'
-- magictab.tsearch() -- Using defaults
return magictab
A sample session typed in the standalone Lua interpreter...
$ /usr/local/bin/lua
Lua 5.4.3 Copyright (C) 1994-2021 Lua.org, PUC-Rio
> t=require('tsearch')
> t.tsearch()
Key: _VERSION Found: <Lua> 5.4
> t:tsearch('Ghost')
Key: A Key With Spaces Found: <Ghost>buster Goes <Ghost> Busting
Key: 0 Found: Here I am - The <Ghost> Within Machine
> t:tsearch('%u%l+','*%1*')
Key: 1 Found: *Another* *Day* - *Another* *Table*
Key: A Key With Spaces Found: *Ghostbuster* *Goes* *Ghost* *Busting*
Key: 0 Found: *Here* I am - *The* *Ghost* *Within* *Machine*
And...
This is only one way.
...there are many.
By the way...
The tsearch() above passes its arguments 1:1 to gsub().
Therefore it can be a useful training function for pattern-matching and replacement checks.
For example: do you know what the frontier pattern %f[] is useful for?
Or: What is a replacement function?
> t:tsearch('%f[%u+%l+]',function(match_not_used_here) return '>>' end)
Key: 1 Found: >>Another >>Day - >>Another >>Table
Key: 0 Found: >>Here >>I >>am - >>The >>Ghost >>Within >>Machine
Key: A Key With Spaces Found: >>Ghostbuster >>Goes >>Ghost >>Busting
> t:tsearch('%f[^%u+%l+]',function(match_not_used_here) return '>>' end)
Key: 1 Found: Another>> Day>> - Another>> Table>>
Key: 0 Found: Here>> I>> am>> - The>> Ghost>> Within>> Machine>>
Key: A Key With Spaces Found: Ghostbuster>> Goes>> Ghost>> Busting>>

Spark Scala Convert Sparse to Dense Feature

I have the following output showing the DataFrame in which I'm trying to one-hot encode a String column:
+---------+--------+------------------+-----------+--------------+----------+----------+-------------+------------------+---------------+-------------+
|longitude|latitude|housing_median_age|total_rooms|total_bedrooms|population|households|median_income|median_house_value|ocean_proximity| feature|
+---------+--------+------------------+-----------+--------------+----------+----------+-------------+------------------+---------------+-------------+
| -122.28| 37.81| 52.0| 340.0| 97.0| 200.0| 87.0| 1.5208| 112500.0| [NEAR BAY]|(5,[3],[1.0])|
| -122.13| 37.67| 40.0| 1748.0| 318.0| 914.0| 317.0| 3.8676| 184000.0| [NEAR BAY]|(5,[3],[1.0])|
| -122.07| 37.67| 27.0| 3239.0| 671.0| 1469.0| 616.0| 3.2465| 230600.0| [NEAR BAY]|(5,[3],[1.0])|
| -122.13| 37.66| 19.0| 862.0| 167.0| 407.0| 183.0| 4.3456| 163000.0| [NEAR BAY]|(5,[3],[1.0])|
As can be seen, the feature column is calculated from the ocean_proximity column. I now want to expand this feature column into a dense vector, and for that I tried something like this:
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.feature.{StringIndexer, OneHotEncoder}
import org.apache.spark.ml.feature.CountVectorizer
import org.apache.spark.mllib.linalg.Vector
import org.apache.spark.sql.functions._
import spark.implicits._
// Identify how many distinct values are in the OCEAN_PROXIMITY column
val distinctOceanProximities = dfRaw.select(col("ocean_proximity")).distinct().as[String].collect()
val oceanProximityAsArrayDF = dfRaw.withColumn("ocean_proximity", array("ocean_proximity"))
val countModel = new CountVectorizer().setInputCol("ocean_proximity").setOutputCol("feature").fit(oceanProximityAsArrayDF)
val transformedDF = countModel.transform(oceanProximityAsArrayDF)
transformedDF.show()
def columnExtractor(idx: Int) = udf((v: Vector) => v(idx))
val featureCols = (0 until distinctOceanProximities.size).map(idx => columnExtractor(idx)($"feature").as(s"${distinctOceanProximities(idx)}"))
val toDense = udf((v:Vector) => v.toDense)
val denseDF = transformedDF.withColumn("feature", toDense($"feature"))
denseDF.show()
This however fails with the following message:
org.apache.spark.sql.AnalysisException: Cannot up cast `input` from struct<type:tinyint,size:int,indices:array<int>,values:array<double>> to struct<type:tinyint,size:int,indices:array<int>,values:array<double>>.
The type path of the target object is:
- root class: "org.apache.spark.mllib.linalg.Vector"
You can either add an explicit cast to the input data or choose a higher precision type of the field in the target object
at org.apache.spark.sql.errors.QueryCompilationErrors$.upCastFailureError(QueryCompilationErrors.scala:137)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveUpCast$.org$apache$spark$sql$catalyst$analysis$Analyzer$ResolveUpCast$$fail(Analyzer.scala:3438)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveUpCast$$anonfun$apply$36$$anonfun$applyOrElse$198.applyOrElse(Analyzer.scala:3467)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveUpCast$$anonfun$apply$36$$anonfun$applyOrElse$198.applyOrElse(Analyzer.scala:3445)
at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDown$1(TreeNode.scala:318)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:74)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:318)
at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDown$3(TreeNode.scala:323)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapChild$2(TreeNode.scala:377)
at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$mapChildren$4(TreeNode.scala:438)
at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
at scala.collection.immutable.List.foreach(List.scala:392)
at scala.collection.TraversableLike.map(TraversableLike.scala:238)
at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
at scala.collection.immutable.List.map(List.scala:298)
at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$mapChildren$1(TreeNode.scala:438)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:244)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:406)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:359)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:323)
at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$transformExpressionsDown$1(QueryPlan.scala:94)
at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$mapExpressions$1(QueryPlan.scala:116)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:74)
at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpression$1(QueryPlan.scala:116)
at org.apache.spark.sql.catalyst.plans.QueryPlan.recursiveTransform$1(QueryPlan.scala:127)
at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$mapExpressions$4(QueryPlan.scala:137)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:244)
at org.apache.spark.sql.catalyst.plans.QueryPlan.mapExpressions(QueryPlan.scala:137)
at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionsDown(QueryPlan.scala:94)
at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressions(QueryPlan.scala:85)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveUpCast$$anonfun$apply$36.applyOrElse(Analyzer.scala:3445)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveUpCast$$anonfun$apply$36.applyOrElse(Analyzer.scala:3441)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsUp$3(AnalysisHelper.scala:90)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:74)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsUp$1(AnalysisHelper.scala:90)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:221)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsUp(AnalysisHelper.scala:86)
It was quite annoying, but the actual error was caused by using the wrong import.
Instead of:
import org.apache.spark.mllib.linalg.Vector
Using this:
import org.apache.spark.ml.linalg.Vector
Solved the issue!

Using pycryptodome or cryptography in python 3.6. How to achieve this?

Imagine the following message:
"This is a message to the signed"
I need to sign this sample message using "pycryptodome" or "cryptography" in Python 3.6 with the following standards:
Format: x.509;
Charset: UTF-8;
Encoding: Base64;
PKCS1 v1.5;
Size: 1024 bits;
Message format: SHA-1;
I have the required "privatekey.pem" but I do not know how to do it with pycryptodome or cryptography.
UPDATED:
I have found this sample code but still do not know whether it is the correct way to achieve what I need based on the standards defined in the original message. The sample code (for pycryptodome):
from Crypto.PublicKey import RSA
from Crypto.Signature import PKCS1_v1_5
from Crypto.Hash import SHA1
import base64
from base64 import b64encode, b64decode
key = open('privatekey.pem', "r").read()
rsakey = RSA.importKey(key)
signer = PKCS1_v1_5.new(rsakey)
digest = SHA1.new()
data = 'This the message to be signed'
digest.update(b64decode(data))
sign = signer.sign(digest)
doc = base64.b64encode(sign)
print(doc)
I can see that I get a 172-character signature, but I need professional advice on whether this meets the standards I described and whether it is the correct way of doing it.
Here is a code snippet, adapted from the applicable documentation page:
from Crypto.Signature import pkcs1_15
from Crypto.Hash import SHA1
from Crypto.PublicKey import RSA
from base64 import b64encode
message = b'This the message to be signed'  # the hash object expects bytes
key = RSA.import_key(open('private_key.der', 'rb').read())  # read the key file in binary mode
h = SHA1.new(message)
signature = pkcs1_15.new(key).sign(h)
print(b64encode(signature))
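Since the question also mentions the cryptography package, here is a minimal sketch of the equivalent SHA-1 / PKCS#1 v1.5 signature with it; the privatekey.pem name is taken from the question, the rest follows the library's documented API:
from base64 import b64encode
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
message = b'This the message to be signed'
with open('privatekey.pem', 'rb') as f:
    private_key = serialization.load_pem_private_key(f.read(), password=None, backend=default_backend())
# PKCS#1 v1.5 padding with SHA-1, matching the requirements above
signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA1())
print(b64encode(signature))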

Aerospike: zlib/bz2 store and retrieve didn't work

I am compressing a string using zlib, then storing it in an Aerospike bin. On retrieving and decompressing it, I get "zlib.error: Error -5 while decompressing data: incomplete or truncated stream".
When I compared the original compressed data with the retrieved compressed data, something was missing at the end of the retrieved data.
I am using Aerospike 3.7.3 & python client 2.0.1
Please help
Thanks
Update: I tried using bz2. It throws ValueError: couldn't find end of stream on retrieve and decompress. It looks like Aerospike is stripping off the last byte or something else from the blob.
Update: Posting the code
import aerospike
import bz2
config = {
    'hosts': [
        ('127.0.0.1', 3000)
    ],
    'policies': {
        'timeout': 1000  # milliseconds
    }
}
client = aerospike.client(config)
client.connect()
content = "An Aerospike Query"
content_bz2 = bz2.compress(content)
key = ('benchmark', 'myset', 55)
#client.put(key, {'bin0':content_bz2})
(key, meta, bins) = client.get(key)
print bz2.decompress(bins['bin0'])
Getting Following Error:
Traceback (most recent call last):
File "asread.py", line 22, in <module>
print bz2.decompress(bins['bin0'])
ValueError: couldn't find end of stream
The bz2.compress method returns a string, and the client sees that type and tries to convert it to the server's as_str type. If it runs into a \0 in an unexpected position, it will truncate the string, causing your error.
Instead, make sure to cast the binary data to a bytearray, which the client converts to the server's as_bytes type. On the read operation, bz2.decompress will work with the bytearray data and give you back the original string.
from __future__ import print_function
import aerospike
import bz2
config = {'hosts': [( '33.33.33.91', 3000 )]}
client = aerospike.client(config)
client.connect()
content = "An Aerospike Query"
content_bz2 = bytearray(bz2.compress(content))
key = ('test', 'bytesss', 1)
client.put(key, {'bin0':content_bz2})
(key, meta, bins) = client.get(key)
print(type(bins['bin0']))
bin0 = bz2.decompress(bins['bin0'])
print(type(bin0))
print(bin0)
Gives back
<type 'bytearray'>
<type 'str'>
An Aerospike Query
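The same bytearray cast should also cover the original zlib attempt; below is a minimal sketch under that assumption (the host address and record key are illustrative):
from __future__ import print_function
import aerospike
import zlib
config = {'hosts': [('127.0.0.1', 3000)]}  # illustrative host
client = aerospike.client(config)
client.connect()
content = "An Aerospike Query"
content_zlib = bytearray(zlib.compress(content))  # cast to bytearray so it is stored as as_bytes
key = ('test', 'byteszlib', 1)  # illustrative key
client.put(key, {'bin0': content_zlib})
(key, meta, bins) = client.get(key)
print(zlib.decompress(bytes(bins['bin0'])))  # prints the original string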

Read WSGI POST data with Unicode encoding

How can I read WSGI POST data with Unicode encoding?
This is part of my code:
....
request_body_size = int(environ.get('CONTENT_LENGTH', 0))
req = str(environ['wsgi.input'].read(request_body_size))
and from req I read my fields.
This is what I posted:
کلمه
and this is what I read inside the Python code:
b"%DA%A9%D9%84%D9%85%D9%87"
This is a byte string, but I can't convert or read it;
I used the encode and decode methods, but none of them worked.
I use Python 3.4 with WSGI and mod_wsgi (Apache 2).
I used Python's urllib module with this code, and it worked:
fm = urllib.parse.parse_qs(request_body['family'].encode().decode(), True)  # returns a dictionary
familyvalue = str([k for k in fm.keys()][0])  # access the first item
Is this the right way to do it?
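For reference, a minimal sketch of reading and decoding the body without calling str() on the raw bytes; the application entry point and the 'family' field follow the question, everything else is an assumption rather than a confirmed fix:
from urllib.parse import parse_qs
def application(environ, start_response):
    size = int(environ.get('CONTENT_LENGTH', 0) or 0)
    body = environ['wsgi.input'].read(size)  # raw bytes, e.g. b"family=%DA%A9%D9%84%D9%85%D9%87"
    fields = parse_qs(body.decode('utf-8'), keep_blank_values=True)  # percent-escapes decoded as UTF-8
    family = fields.get('family', [''])[0]  # e.g. 'کلمه'
    start_response('200 OK', [('Content-Type', 'text/plain; charset=utf-8')])
    return [family.encode('utf-8')]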