I wrote a parser for a custom file format using attoparsec. The profiling report indicated that around 67% of the memory allocation is done in a function named tab, which also consumes the most time.
The tab
function is pretty simple:
tab :: Parser Char
tab = char '\t'
The entire profiling report is as follows:
       ASnapshotParser +RTS -p -h -RTS

    total time  =       37.88 secs   (37882 ticks @ 1000 us, 1 processor)
    total alloc = 54,255,105,384 bytes  (excludes profiling overheads)

COST CENTRE    MODULE                %time %alloc

tab            Main                   83.1   67.7
main           Main                    6.4    4.2
readTextDevice Data.Text.IO.Internal   5.5   24.0
snapshotParser Main                    4.7    4.0

                                                                 individual     inherited
COST CENTRE        MODULE                 no.     entries  %time %alloc   %time %alloc

MAIN               MAIN                    75           0    0.0    0.0   100.0  100.0
 CAF               Main                   149           0    0.0    0.0   100.0  100.0
  tab              Main                   156           1    0.0    0.0     0.0    0.0
  snapshotParser   Main                   153           1    0.0    0.0     0.0    0.0
  main             Main                   150           1    6.4    4.2   100.0  100.0
   doStuff         Main                   152     1000398    0.3    0.0    88.1   71.8
    snapshotParser Main                   154           0    4.7    4.0    87.7   71.7
     tab           Main                   157           0   83.1   67.7    83.1   67.7
   readTextDevice  Data.Text.IO.Internal  151       40145    5.5   24.0     5.5   24.0
 CAF               Data.Text.Array        142           0    0.0    0.0     0.0    0.0
 CAF               Data.Text.Internal     140           0    0.0    0.0     0.0    0.0
 CAF               GHC.IO.Handle.FD       122           0    0.0    0.0     0.0    0.0
 CAF               GHC.Conc.Signal        103           0    0.0    0.0     0.0    0.0
 CAF               GHC.IO.Encoding        101           0    0.0    0.0     0.0    0.0
 CAF               GHC.IO.FD              100           0    0.0    0.0     0.0    0.0
 CAF               GHC.IO.Encoding.Iconv   89           0    0.0    0.0     0.0    0.0
  main             Main                   155           0    0.0    0.0     0.0    0.0
How do I optimize this?
The entire code for the parser is here. The file which I'm parsing is around 77MB.
tab is a scapegoat. If you define boo :: Parser (); boo = return () and insert a boo before every bind in the snapshotParser definition, the bulk of the cost allocations will shift to boo instead. So it seems the profiler is shifting the blame for the allocations of the parse results, likely due to extensive inlining of attoparsec code, as John L suggested in the comments.
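To make the experiment concrete, here is a hypothetical sketch of it; your actual snapshotParser is not reproduced in the question, so the two decimal fields below are invented stand-ins:

import Data.Attoparsec.Text (Parser, char, decimal)

-- Hypothetical two-field stand-in for the sixteen-field Snapshot.
data Snapshot = Snapshot !Int !Int

tab :: Parser Char
tab = char '\t'

-- A parser that does nothing and allocates no parse results itself.
boo :: Parser ()
boo = return ()

-- With a boo before every bind, the do-nothing parser absorbs most of
-- the blame for time and allocation in the profiling report.
snapshotParser :: Parser Snapshot
snapshotParser = do
  boo
  a <- decimal
  boo
  _ <- tab
  boo
  b <- decimal
  return (Snapshot a b)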
As for the performance issues, the key point is that, as you are parsing a 77MB text file to build a list with a million elements, you want the file processing to be lazy, and not strict. Once that is sorted out, decoupling I/O and parsing in doStuff and building the list of snapshots without an accumulator are helpful as well. Here is a modified version of your program taking that into account.
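Since the full modified program is linked rather than reproduced here, the following is a minimal sketch of its shape, assuming lazy Text I/O plus Data.Attoparsec.Text.Lazy; the two-field Snapshot, the parser body, and the snapshots.txt file name are invented stand-ins for yours:

import Data.Attoparsec.Text (Parser, char, decimal, endOfLine)
import qualified Data.Attoparsec.Text.Lazy as AL
import qualified Data.Text.Lazy as TL
import qualified Data.Text.Lazy.IO as TLIO

-- Hypothetical two-field stand-in for the sixteen-field Snapshot.
data Snapshot = Snapshot !Int !Int deriving Show

tab :: Parser Char
tab = char '\t'

snapshotParser :: Parser Snapshot
snapshotParser = Snapshot <$> decimal <* tab <*> decimal <* endOfLine

-- No accumulator: each cons cell is produced only when the consumer
-- demands it, so parsing proceeds in step with the lazy reads.
snapshots :: TL.Text -> [Snapshot]
snapshots txt = case AL.parse snapshotParser txt of
  AL.Done rest s -> s : snapshots rest
  AL.Fail {}     -> []

main :: IO ()
main = do
  txt <- TLIO.readFile "snapshots.txt"  -- lazy I/O: chunks read on demand
  let ss = snapshots txt
  -- Sharing ss between sum and length forces the whole list into memory.
  print (sum [a | Snapshot a _ <- ss] `div` length ss)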
This version should have acceptable performance even if you force the whole list of snapshots into memory, as I did in main here. To gauge what is "acceptable", keep in mind that, given the sixteen (small, unboxed) fields in each Snapshot plus the overhead of the Snapshot and list constructors, we are talking about 152 bytes per list cell, which boils down to ~152MB for your test data. In any case, this version is about as lazy as possible, as you can see by removing the division in main, or replacing it by last ss, as in the variant below.
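Reusing the names from the sketch above, the fully lazy variant would be:

main :: IO ()
main = do
  txt <- TLIO.readFile "snapshots.txt"
  -- Only the last snapshot is ever demanded, so each list cell becomes
  -- garbage right after it is traversed; residency stays roughly flat.
  print (last (snapshots txt))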
N.B.: My tests were done with attoparsec-0.12.