I'm trying to modify one of my existing scripts that uses uproot to read data from a ROOT file into a pandas DataFrame via uproot.pandas.iterate. Currently it only reads branches containing simple data types (floats, ints, bools), but I would like to add the ability to read some branches that store 3x3 matrices. From the README, I understand that in cases like this it's recommended to flatten the structure by passing flatten=True as an argument to the iterate function. However, when I do this, it crashes:
Traceback (most recent call last):
File "genPreselTuples.py", line 338, in <module>
data = read_events(args.decaymode, args.tag, args.year, args.polarity, chunk=args.chunk, numchunks=args.numchunks, verbose=args.verbose, testing=args.testing)
File "genPreselTuples.py", line 180, in read_events
for df in uproot.pandas.iterate(filename_list, treename, branches=list(branchdict.keys()), entrysteps=100000, namedecode='utf-8', flatten=True):
File "/afs/cern.ch/work/d/djwhite/miniconda3/envs/D02HHHHml/lib/python3.8/site-packages/uproot/tree.py", line 117, in iterate
for start, stop, arrays in tree.iterate(branches=branchesinterp, entrysteps=entrysteps, outputtype=outputtype, namedecode=namedecode, reportentries=True, entrystart=0, entrystop=tree.numentries, flatten=flatten, flatname=flatname, awkwardlib=awkward, cache=cache, basketcache=basketcache, keycache=keycache, executor=executor, blocking=blocking):
File "/afs/cern.ch/work/d/djwhite/miniconda3/envs/D02HHHHml/lib/python3.8/site-packages/uproot/tree.py", line 721, in iterate
out = out()
File "/afs/cern.ch/work/d/djwhite/miniconda3/envs/D02HHHHml/lib/python3.8/site-packages/uproot/tree.py", line 678, in <lambda>
return lambda: uproot._connect._pandas.futures2df([(branch.name, interpretation, wrap_again(branch, interpretation, future)) for branch, interpretation, future, past, cachekey in futures], outputtype, start, stop, flatten, flatname, awkward)
File "/afs/cern.ch/work/d/djwhite/miniconda3/envs/D02HHHHml/lib/python3.8/site-packages/uproot/_connect/_pandas.py", line 162, in futures2df
array = array.view(awkward.numpy.dtype([(str(i), array.dtype) for i in range(functools.reduce(operator.mul, array.shape[1:]))])).reshape(array.shape[0])
ValueError: When changing to a larger dtype, its size must be a divisor of the total size in bytes of the last axis of the array.
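For reference, the ValueError in the last frame comes from NumPy's view mechanics rather than from uproot itself: ndarray.view can only reinterpret the last axis as a compound dtype whose itemsize evenly divides that axis's total size in bytes. A minimal sketch in plain NumPy (no uproot, dummy data) that mimics the line in futures2df and triggers the same error class:

```python
import numpy as np

# working case, mirroring uproot's futures2df line: 4 events, 9 floats each
# (a flattened 3x3 matrix); 9 * 8 = 72 bytes per row, and a 9-field float64
# compound dtype is also 72 bytes, so the view succeeds
ok = np.arange(36.0).reshape(4, 9)
good = ok.view(np.dtype([(str(i), ok.dtype) for i in range(9)])).reshape(4)
print(good.shape)  # (4,)

# failing case: 4 events, 3 floats each -> 24 bytes per row; a 2-field
# float64 compound dtype is 16 bytes, which does not divide 24, so NumPy
# raises the same ValueError seen in the traceback above
a = np.arange(12.0).reshape(4, 3)
bad_dtype = np.dtype([(str(i), a.dtype) for i in range(2)])
try:
    a.view(bad_dtype)
except ValueError as e:
    print(type(e).__name__)  # ValueError
```

So the crash suggests that, somewhere along the way, uproot computes a compound dtype whose byte size doesn't match the last axis of the array it is trying to flatten.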
My code is the following:
# prepare for file reading
data = pd.DataFrame()  # empty dataframe to hold the final output data
file_counter = 0       # count how many chunks have been processed
event_counter = 0      # count how many events were in the processed input chunks

# loop over files in filename_list & add their contents to the dataframe
for df in uproot.pandas.iterate(filename_list, treename, branches=list(branchdict.keys()),
                                entrysteps=100000, namedecode='utf-8', flatten=True):
    df.rename(branchdict, axis='columns', inplace=True)  # rename branches to custom names (defined in dictionary)
    file_counter += 1               # manage chunk counting
    event_counter += df.shape[0]    # manage event counting
    print(df.head(10))              # debugging

    # apply all cuts
    for cut in cutlist:
        df.query(cut, inplace=True)

    # append surviving events to the output dataframe
    data = data.append(df, ignore_index=True)

    # terminal output
    print('Processed {:,} chunks (kept {:,} of {:,} events ({:.2f}%))'.format(
        file_counter, data.shape[0], event_counter,
        100 * data.shape[0] / event_counter), end='\r')
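As an aside on the speed concern: growing a DataFrame with append inside the loop copies every already-accumulated row on each iteration, which becomes quadratic over many chunks. It is usually much faster to collect the chunks in a list and concatenate once at the end. A sketch of that pattern in plain pandas, with dummy DataFrames standing in for the output of iterate:

```python
import pandas as pd

chunks = []
# stand-ins for the DataFrames yielded by uproot.pandas.iterate
for df in (pd.DataFrame({'eventNumber': [1, 2]}),
           pd.DataFrame({'eventNumber': [3]})):
    # ...apply cuts to df here...
    chunks.append(df)

# single concatenation instead of repeated append
data = pd.concat(chunks, ignore_index=True)
print(len(data))  # 3
```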
I have been able to get it to work with flatten=False (when printing the dataframe, it spreads the values out into columns, similar to what is shown here: https://github.com/scikit-hep/uproot#multiple-values-per-event-fixed-size-arrays).
eventNumber runNumber totCandidates nCandidate ... D0_SubVtx_234_COV_[1][2] D0_SubVtx_234_COV_[2][0] D0_SubVtx_234_COV_[2][1] D0_SubVtx_234_COV_[2][2]
0 13769776 177132 3 0 ... -0.016343 0.032616 -0.016343 0.470791
1 13769776 177132 3 1 ... -0.016343 0.032616 -0.016343 0.470791
2 13769776 177132 3 2 ... -0.016343 0.032616 -0.016343 0.470791
3 36250092 177132 2 0 ... 0.004726 -0.017212 0.004726 0.193447
4 36250092 177132 2 1 ... 0.004726 -0.017212 0.004726 0.193447
[5 rows x 296 columns]
But I understand from the README that not flattening these structures isn't recommended, at least for performance reasons - and since I have O(10^8) rows to get through, speed is a real concern. I'm interested in what's causing this crash, so I can work out the best way to handle these objects (and eventually write them out to a new file later). Thanks!
EDIT: I've narrowed the problem down to the branches option. If I manually specify some branches (e.g. branches=['eventNumber', 'D0_SubVtx_234_COV_']) then it works fine with both flatten=True and flatten=False. But when I use list(branchdict.keys()), it gives the ValueError shown at the top of the original question.
I've checked this list, and all the elements in it are real branch names (otherwise it gives a KeyError instead). It contains 206 regular branches, some holding standard data types and others holding length-1 lists of single values, plus 10 branches containing 3x3 matrices like the one above.
If I remove the matrix branches from this list, it works as expected. The same is true if I remove only the length-1 list branches. The crash occurs whenever I try to read (separate) branches containing both these length-1 lists and these 3x3 matrices.
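Given that diagnosis, one possible workaround while the mix of length-1 lists and matrices crashes is to read the matrix branches unflattened and spread them into columns by hand: a fixed 3x3 matrix per event is just an (n, 3, 3) array that can be reshaped to (n, 9). A sketch with a dummy NumPy array standing in for the COV branch (the column names are illustrative, not uproot's own):

```python
import numpy as np
import pandas as pd

# stand-in for an (n_events, 3, 3) covariance branch read with flatten=False
cov = np.arange(18.0).reshape(2, 3, 3)

# spread each 3x3 matrix into 9 flat columns, one per matrix element
cols = ['COV_[{}][{}]'.format(i, j) for i in range(3) for j in range(3)]
df = pd.DataFrame(cov.reshape(len(cov), -1), columns=cols)

print(df.shape)  # (2, 9)
```

The resulting columns can then be joined onto the DataFrame of simple branches for each chunk before applying cuts.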