Findings and methodology are presented for an assessment of the facial image processing software developed by Carnegie Mellon University (CMU).
The assessment was divided into three phases, one for each of the three algorithms (Face Detection, Face Recognition, and Periocular Face Reconstruction). Results were computed with a common software framework to maximize consistency across the assessments: each phase was designed as a one-to-one comparison in which both the benchmark algorithms and the algorithm under assessment were run against the same dataset and scored with the same metrics. Summary results are provided for each phase, with metric values broken out by algorithm and dataset. The CMU-developed, CNN-based software generally outperformed the benchmarks, with a few noted exceptions. In the detection phase, the assessment found CMU's Ultron to perform comparably to tinyFace, PittPatt to perform worse than both, and YOLO to have the lowest performance. In the recognition domain, the CMU-developed Native CMU algorithm outperformed the benchmarks across three datasets without any tuning. The report includes 28 references and extensive tables and figures.
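The one-to-one comparison described above amounts to running every algorithm through the same scoring pipeline over the same data. The sketch below illustrates one way such a harness could look for the detection phase; it is not the report's actual framework, and all names (`Sample`, `iou`, `evaluate`, the toy detector) are illustrative assumptions.

```python
"""Minimal sketch of a one-to-one detection comparison harness.
All names here are illustrative, not the report's framework."""

from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)


def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


@dataclass
class Sample:
    image_id: str
    truth: List[Box]  # ground-truth face boxes


def evaluate(detector: Callable[[str], List[Box]],
             dataset: List[Sample],
             iou_thresh: float = 0.5) -> Dict[str, float]:
    """Run one detector over one dataset and return precision/recall.

    Greedy matching: each prediction is matched to at most one
    unmatched ground-truth box with IoU above the threshold.
    """
    tp = fp = fn = 0
    for sample in dataset:
        preds = detector(sample.image_id)
        unmatched = list(sample.truth)
        for p in preds:
            best = max(unmatched, key=lambda t: iou(p, t), default=None)
            if best is not None and iou(p, best) >= iou_thresh:
                unmatched.remove(best)
                tp += 1
            else:
                fp += 1
        fn += len(unmatched)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {"precision": precision, "recall": recall}


if __name__ == "__main__":
    # Toy example: a single image with one face, and a detector that
    # returns the ground-truth box exactly.
    toy = [Sample("img0", truth=[(10, 10, 50, 50)])]
    perfect = lambda image_id: [(10, 10, 50, 50)]
    print(evaluate(perfect, toy))  # {'precision': 1.0, 'recall': 1.0}
```

The point of the design is that each algorithm (e.g., Ultron, tinyFace, PittPatt, YOLO) is wrapped behind the same callable interface and scored by the same function over the same samples, so the summary tables compare like with like.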