Books (e.g. "Barcodes for Mobile Devices", ISBN 978-0-521-88839-4), papers (e.g. "Bar Codes May Have Poorer Error Rates Than Commonly Believed", DOI: 10.1373/clinchem.2010.153288), and websites give figures for the accuracy or error rates of barcodes.
The quoted figures vary; for Code39, for example, they range from 1 error in 1.7 million, to 1 error in 3 million, to 1 error in 4.5 million.
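To make the spread concrete, here is a quick back-of-the-envelope conversion of those figures into per-read probabilities (my own arithmetic, assuming "error rate" means errors per read):

```python
# Convert the cited "1 error in N reads" figures into per-read probabilities.
cited_figures = {
    "1 in 1.7 million": 1 / 1.7e6,
    "1 in 3 million":   1 / 3.0e6,
    "1 in 4.5 million": 1 / 4.5e6,
}
for label, p in cited_figures.items():
    print(f"{label}: p = {p:.2e} per read")
# The highest and lowest figures differ by a factor of roughly 2.6.
```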
Where do these numbers come from, and how can one calculate them (e.g. for Code39)?
I also couldn't find any useful information in the definition of Code39 in ISO/IEC 16388:2007.
The "error rate" these numbers describe is the read error rate, i.e. how often a barcode may be read incorrectly when scanned. In order for barcodes to be useful this needs to be a very low value and so barcode formats that have lower read error rates are potentially better (although there are other factors involved as well).
These numbers are presumably determined by empirical testing. On the website you linked to there is a further link to a study by Ohio University that describes the methodology they used, which is an example of how this can be done: