I have a file named “WW” that contains a tree named “mini” that contains leaves with information.
I want to write a script that reads this file and then plots multiple leaves into one histogram.
I have tried:
TFile *WW = new TFile("WW.root");
TTree* tWW = (TTree*) WW->Get("mini");
TCanvas *c1 = new TCanvas("c1","c1",600,600);
c1->cd();
tWW -> Draw("leaf1");
tWW -> Draw ("leaf2","same");
This only plots one of the leaves, not the second one on the same histogram.
Thanks
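For reference, a likely cause: TTree::Draw takes the selection (cut) string as its second argument and the graphics option as its third, so "same" above is being parsed as a cut. A minimal PyROOT sketch of the intended overlay, using the file, tree, and leaf names from the question:

import ROOT

# Open the file and fetch the tree named "mini"
f = ROOT.TFile.Open("WW.root")
t = f.Get("mini")

c1 = ROOT.TCanvas("c1", "c1", 600, 600)
t.Draw("leaf1")              # first leaf defines the histogram
t.Draw("leaf2", "", "SAME")  # empty selection; "SAME" is the draw option
c1.Update()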
I saw a lot of tutorials about how to load csv (Gremlin) data in the format of vertices and edges into AWS Neptune. For a lot of reasons, I cannot create vertices and edges for data loading. Instead I have just the raw csv file where each row is a record (e.g. a person).
How can I create nodes and relationships from each row of record from the raw csv in Neptune from the notebook interface?
Given you mentioned wanting to do this in the notebooks, the examples below are all run from inside a Jupyter notebook. I don't have the data sets you mentioned to hand, so let's make a simple one in a notebook cell using the %%bash cell magic:
%%bash
echo "code,city,region
AUS,Austin,US-TX
JFK,New York,US-NY" > test.csv
We can then generate the openCypher CREATE steps for the nodes contained in that CSV file using a simple cell such as:
import csv

with open('test.csv', newline='') as csvfile:
    reader = csv.DictReader(csvfile, escapechar="\\")
    query = ""
    for row in reader:
        s = "CREATE (:Airport {"
        for k in row:
            s += f'{k}:"{row[k]}", '
        s = s[:-2] + '})\n'
        query += s

print(query)
Which yields
CREATE (:Airport {code:"AUS", city:"Austin", region:"US-TX"})
CREATE (:Airport {code:"JFK", city:"New York", region:"US-NY"})
Finally, let's have the notebook's %%oc cell magic run that query for us:
ipython = get_ipython()
magic = ipython.run_cell_magic
magic(magic_name = "oc", line='', cell=query)
To verify that the query worked:
%%oc
MATCH (a:Airport)
RETURN a.code, a.city
which returns:
a.code a.city
1 AUS Austin
2 JFK New York
There are many ways you could do this, but this is a simple one if you want to stay inside the notebooks. Given that your question does not include much detail or an example of what you have tried so far, hopefully this gives you some pointers.
I'm stuck with a problem using PyROOT. I'm not able to read a leaf of a tree which is a two-dimensional array of float values. You can see the relevant tree in the following:
root [1] TTree *tr = (TTree*)g->Get("tevent_2nd_integral")
root [2] tr->Print()
*Tree :tevent_2nd_integral: Event packet tree 2nd GTUs integral *
*Entries : 57344 : Total = 548967602 bytes File Size = 412690067 *
*        :        : Tree compression factor = 1.33 *
*Br 7 :photon_count_data : photon_count_data[1][1][48][48]/F *
*Entries : 57344 : Total Size= 530758073 bytes File Size = 411860735 *
*Baskets : 19121 : Basket Size= 32000 bytes Compression= 1.29 *
…
The array in question is photon_count_data[1][1][48][48] (branch Br 7 in the printout above). Actually I have several root files, and I tried both making a chain and using hadd, e.g. hadd file.root `ls /path/*.root`.
I tried several approaches, shown below, and each ran into a different problem: sometimes the numpy array that should contain the 48x48 values per event was not created at all; other times it was filled with nothing, or with strange values (even negative ones, which is impossible).
My code is the following:
# assumed imports for the snippets below
import os
import numpy as np
import ROOT as XROOT
from ROOT import TChain

# calling the root file after using hadd to merge all files
rootFile = path + "merge.root"   # path is defined elsewhere
f = XROOT.TFile(rootFile, 'read')
tree = f.Get('tevent_2nd_integral')

# making a chain
PDMchain = TChain("tevent_2nd_integral")
for filename in sorted(os.listdir(path)):
    if filename.endswith('.root') and ("CPU_RUN_MAIN" in filename):
        PDMchain.Add(filename)

pdm_counts = []

# First method: using a python pyl class
leaves = tree.GetListOfLeaves()

# define dynamically a python class containing root Leaves objects
class PyListOfLeaves(dict):
    pass

# create an instance
pyl = PyListOfLeaves()
for i in range(0, leaves.GetEntries()):
    leaf = leaves.At(i)
    name = leaf.GetName()
    # add the leaf dynamically as an attribute of my class
    pyl.__setattr__(name, leaf)

for iev in range(0, nEntries_pixel):   # nEntries_pixel is defined elsewhere
    tree.GetEntry(iev)
    pdm_counts.append(pyl.photon_count_data.GetValue())

# Second method: the Draw method
count = tree.Draw("photon_count_data", "", "")
pdm_counts.append(np.array(np.frombuffer(tree.GetV1(), dtype=np.float64, count=count)))

# Third method: the ROOT buffer method
for event in PDMchain:
    pdm_data_for_this_event = event.photon_count_data
    pdm_data_for_this_event.SetSize(2304)  # ROOT buffer
    pdm_counts.append(np.array(pdm_data_for_this_event, copy=True))
- With the python class method, the array pdm_counts is filled with just the first element contained in photon_count_data.
- With the Draw method, I get a segmentation violation or a strange kernel issue.
- With the ROOT buffer method, I do get back a list containing all 2304 (48x48) values, but they are completely different from those in photon_count_data, i.e. negative values or values that are orders of magnitude off.
Could you tell me where I'm going wrong, or whether there is a more elegant and quicker way to do this?
Thanks in advance
Actually, I found the solution and I would like to share it in case anyone ever needs it!
The third method explained above,
for event in PDMchain:
    pdm_data_for_this_event = event.photon_count_data
    pdm_data_for_this_event.SetSize(2304)  # ROOT buffer
    pdm_counts.append(np.array(pdm_data_for_this_event, copy=True))
works, but unfortunately I was using Spyder to visualize the data, and for some reason it returned strange values which are not right! So... don't use Spyder!!!
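As a follow-up, if you want each event back in its original 48x48 shape rather than as a flat list of 2304 values, a numpy reshape inside the same loop does it (a sketch; the shape comes from the photon_count_data[1][1][48][48] declaration in the printout above):

import numpy as np

for event in PDMchain:
    buf = event.photon_count_data
    buf.SetSize(2304)  # tell ROOT the buffer length
    frame = np.array(buf, copy=True).reshape(48, 48)  # back to the pixel grid
    pdm_counts.append(frame)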
Moreover, another method works fine:
from root_pandas import read_root
data = read_root('merge.root', 'tevent_2nd_integral', columns=['cpu_packet_time', 'photon_count_data'])
Cheers!
In this post, they explain how to generate a FITS file from an ASCII file: Converting ASCII Table to FITS image. However, I would also like to know how to define the header and data that go into the FITS file.
For example, when I open a spectral FITS file with astropy (one downloaded from a telescope), I can access the data and header separately, i.e.:
In [1]:hdu = fits.open('observation.fits', memmap=True)
In [2]:header = hdu[0].header
In [3]:header
Out [3]:
SIMPLE = T / conforms to FITS standard
BITPIX = 8
NAXIS = 1
NAXIS1 = 47356
EXTEND = T
DATE = 'date' / file creation date (YYYY-MM-DDThh:mm:ss UT)
ORIGIN = 'XXX ' / European Southern Observatory
TELESCOP= 'XXX' / ESO Telescope Name
INSTRUME= 'Instrument' / Instrument used.
OBJECT = 'ABC ' / Original target.
RA = 30.4993 / xx:xx:xx.x RA (J2000) pointing
DEC = -20.0009 / xx:xx:xx.x DEC (J2000) pointing
CTYPE1 = 'WAVE ' / wavelength axis in nm
CRPIX1 = 0. / Reference pixel in z
CRVAL1 = 298.903594970703 / central wavelength
CDELT1 = 0.0199999995529652 / nm per pixel
CUNIT1 = 'nm ' / spectral unit
..
bla bla
..
END
In [4]:data = hdu[0].data
In [5]:data
Out [5]:array([ 1000, 1001, 1002, ...,
5.18091546e-13, 4.99434453e-13, 4.91280864e-13])
Let's assume I have data like below:
WAVE FLUX
1000 2.02e-12
1001 3.03e-12
1002 4.04e-12
..
bla bla
..
So, I'd like to generate a spectral FITS file with my own data (and its own header).
Mini question: now let's assume I generate the spectral FITS file correctly, but I realise that I forgot to take the logarithm of the WAVE values on the X axis (1000, 1001, 1002, ...). How can I do that without touching the FLUX values on the Y axis (2.02e-12, 3.03e-12, 4.04e-12)?
FITS files are organized as one or more HDUs (Header Data Units), each consisting, as the name suggests, of one data object (generally a single array for an observation, though sometimes something else, like a table) and the header of metadata that goes with that data.
To create a file from scratch, especially an image, the simplest way is to directly create an ImageHDU object:
>>> from astropy.io import fits
>>> hdu = fits.ImageHDU()
Just as with an HDU read from an existing file, this HDU has a (mostly empty) header, and an empty data attribute that you can then assign to:
>>> import numpy as np
>>> hdu.data = np.array(<some numpy array>)
>>> hdu.header['TELESCOP'] = 'Gemini'
When you're satisfied you can write the HDU out to a file with:
>>> hdu.writeto('filename.fits')
(Note: A lot of the documentation you'll see demonstrates a more complex process of creating an HDUList object, appending the HDU to the HDU list, and then writing the full HDU list. This is only necessary if you're creating a multi-extension FITS file. For a single HDU, you can use hdu.writeto directly and the framework will handle the other structural details.)
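Putting these pieces together for the WAVE/FLUX table in the question, a minimal sketch might look like this (keyword values are illustrative, echoing the header shown in the question; a PrimaryHDU is used here so the file has a single HDU):

import numpy as np
from astropy.io import fits

wave = np.array([1000.0, 1001.0, 1002.0])        # WAVE column, assumed nm
flux = np.array([2.02e-12, 3.03e-12, 4.04e-12])  # FLUX column

hdu = fits.PrimaryHDU(data=flux)
# Describe the regular wavelength grid with the usual WCS keywords
hdu.header['CTYPE1'] = 'WAVE'
hdu.header['CUNIT1'] = 'nm'
hdu.header['CRPIX1'] = 1.0                 # reference pixel (first bin)
hdu.header['CRVAL1'] = wave[0]             # wavelength at the reference pixel
hdu.header['CDELT1'] = wave[1] - wave[0]   # wavelength step per pixel
hdu.writeto('my_spectrum.fits', overwrite=True)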
In general you don't need to manipulate the headers that describe the format of the data itself--that is automatic and should not be touched by hand (FITS has the unfortunate misfeature of mixing information about data structure with actual metadata). You can see more examples on how to manipulate FITS data here: http://docs.astropy.org/en/stable/generated/examples/index.html#astropy-io
Your other question pertains to manipulating the WCS (World Coordinate System) of the image, and in particular for spectral data this can be non-trivial. I would ask a separate question about that with more details about what you hope to accomplish.
I am working on an assignment where I need to draw a line between points on a map. I built it up using if statements, but that limits how many times I can do something, so I want to use a loop.
The code I have so far is below. I can add the cities and draw a line, but it draws a line to itself rather than from a start point to the next point (and continuing on).
Can anybody please help at all?
def level1():
    cityXvalue = [45,95,182,207,256,312,328,350,374,400]
    cityYvalue = [310,147,84,201,337,375,434,348,335,265]
    # Display the map image
    map = makePicture(getMediaPath("map.png"))
    show(map)
    number = 0
    # Ask user to enter numbers of the cities they wish to visit
    cities = requestInteger("Enter the number of the city you would like to visit")
    # Draw a line between previously entered point and current one
    while cities > number:
        city = requestInteger("Please choose a city number")
        addLine(map, cityXvalue[city], cityYvalue[city], cityXvalue[city], cityYvalue[city])
        repaint(map)
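For reference, one way to fix this is to remember the previously chosen city and draw from its coordinates to the current one. A sketch using the same JES helpers as above (a sketch, assuming the JES environment provides makePicture, addLine, etc. as in the question):

def level1():
    cityXvalue = [45,95,182,207,256,312,328,350,374,400]
    cityYvalue = [310,147,84,201,337,375,434,348,335,265]
    map = makePicture(getMediaPath("map.png"))
    show(map)
    cities = requestInteger("Enter the number of cities you would like to visit")
    # The first city is only a starting point; no line is drawn yet
    previous = requestInteger("Please choose a starting city number")
    for visit in range(cities - 1):
        city = requestInteger("Please choose the next city number")
        # Draw from the previous city's coordinates to the current city's
        addLine(map, cityXvalue[previous], cityYvalue[previous],
                cityXvalue[city], cityYvalue[city])
        repaint(map)
        previous = city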
I receive data in the form
id1|attribute1a,attribute1b|attribute2a|attribute3a,attribute3b,attribute3c....
id2||attribute2b,attribute2c|..
I'm trying to merge it all into a bag of tuples: an id field followed by a tuple containing all of my other fields merged together.
(id1,(attribute1a,attribute1b,attribute2a,attribute3a,attribute3b,attribute3c...))
(id2,(attribute2b,attribute2c...))
Currently I load it like this:
my_data = LOAD '$input' USING PigStorage('|') AS
    (id:chararray, attribute1:chararray, attribute2:chararray, ...);
Then I've tried all combinations of FLATTEN, TOKENIZE, GENERATE, TOTUPLE, BagConcat, etc. to massage it into the form I want, but I'm new to Pig and just can't figure it out. Can anyone help? Any open-source UDF libraries are fair game.
Load each line as an entire string, and then use the built-in STRSPLIT UDF to achieve the desired result. This relies on there being no tabs in your list of attributes, and it assumes that | and , are not to be treated differently when separating out the attributes. Also, I modified your input a little to show more edge cases.
input.txt:
id1|attribute1a,attribute1b|attribute2a|,|attribute3a,attribute3b,attribute3c
id2||attribute2b,attribute2c,|attribute4a|,attribute5a
test.pig:
my_data = LOAD '$input' AS (str:chararray);
split1 = FOREACH my_data GENERATE FLATTEN(STRSPLIT(str, '\\|', 2)) AS (id:chararray, attr:chararray);
split2 = FOREACH split1 GENERATE id, STRSPLIT(attr, '[,|]') AS attributes;
DUMP split2;
Output of pig -x local -p input=input.txt test.pig:
(id1,(attribute1a,attribute1b,attribute2a,,,attribute3a,attribute3b,attribute3c))
(id2,(,attribute2b,attribute2c,,attribute4a,,attribute5a))
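The 2 passed to the first STRSPLIT limits that split to two fields, so the id comes off cleanly while the rest of the line stays intact; the second STRSPLIT then treats , and | interchangeably, which is why the adjacent delimiters in the modified input show up as empty fields in the output above.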