Research Article

Comparison of Objective Rating Techniques vs. Expert Opinion in the Validation of Human Body Surrogates

Author and Article Information
F. Scott Gayzik

Wake Forest School of Medicine, Virginia Tech-Wake Forest University Center for Injury Biomechanics, 575 N. Patterson Ave, Winston Salem, NC, 27101
sgayzik@wakehealth.edu

Matthew Davis

Wake Forest School of Medicine, Virginia Tech-Wake Forest University Center for Injury Biomechanics, 575 N. Patterson Ave, Winston Salem, NC, 27101
mattdavi@wakehealth.edu

Bharath Koya

Wake Forest School of Medicine, Virginia Tech-Wake Forest University Center for Injury Biomechanics, 575 N. Patterson Ave, Winston Salem, NC, 27101
bkoya@wakehealth.edu

Jeremy M. Schap

Wake Forest School of Medicine, Virginia Tech-Wake Forest University Center for Injury Biomechanics, 575 N. Patterson Ave, Winston Salem, NC, 27101
jschap@wakehealth.edu

Fang-Chi Hsu

Wake Forest School of Medicine, Department of Biostatistical Sciences, Division of Public Health Sciences, 525 Vine St, Winston Salem, NC, 27101
fhsu@wakehealth.edu

1Corresponding author.

ASME doi:10.1115/1.4042126 History: Received February 15, 2018; Revised November 21, 2018

Abstract

Objective evaluation (OE) methods provide quantitative insight into how well time history data from computational models match data from physical systems. Two techniques commonly used for this purpose are CORA and the ISO/TS 18571 standard. These ostensibly objective techniques differ in their algorithms, leading to discrepancies when interpreting their results. The objectives of this study were (1) to apply both techniques to a dataset from a computational model and compare the resulting scores, and (2) to survey subject matter experts (SMEs) to determine which OE method agrees more consistently with SME interpretation. The GHBMC male human body model was used in simulations of biomechanics experiments, producing 58 time history curves. Because both techniques produce phase, size, and shape scores, 174 pairwise comparisons were made. Statistical analysis revealed significant differences between the two OE methods for each component rating metric. The SMEs surveyed (n = 40) scored how well the computational traces matched the experiments for the three rating metrics. SME interpretation was found to agree statistically with the ISO shape and phase metrics, but differed significantly from the ISO size rating; it agreed with the CORA size rating. These findings suggest that, when possible, engineers should use a mixed approach to reporting objective ratings, combining the shape and phase methods of ISO/TS 18571 with the size method of CORA. We recommend weighting the metrics, from greatest to least, as shape, phase, and size. Given the general levels of agreement observed and the sample size, the results require a nuanced interpretation.
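The mixed approach described above can be sketched as a weighted combination of sub-scores: shape and phase taken from ISO/TS 18571 and size taken from CORA, with weights ordered shape > phase > size. A minimal illustrative sketch follows; the function name, the assumption that sub-scores lie in [0, 1], and the specific weight values are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of a mixed objective rating: ISO/TS 18571 shape and
# phase sub-scores combined with the CORA size sub-score. The weights below
# (shape > phase > size) are illustrative assumptions only.

def mixed_rating(iso_shape, iso_phase, cora_size, weights=(0.5, 0.3, 0.2)):
    """Weighted sum of sub-scores, each assumed to lie in [0, 1]."""
    scores = (iso_shape, iso_phase, cora_size)
    if not all(0.0 <= s <= 1.0 for s in scores):
        raise ValueError("sub-scores must lie in [0, 1]")
    w_shape, w_phase, w_size = weights
    return w_shape * iso_shape + w_phase * iso_phase + w_size * cora_size

# Example: a trace with strong shape/phase agreement but a weaker size match.
score = mixed_rating(0.85, 0.80, 0.60)
```

In practice the weights would be chosen by the analyst; the point of the sketch is only that the shape weight dominates, consistent with the recommendation above.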

Copyright (c) 2018 by ASME
