Please contact firstname.lastname@example.org with any questions about the workshop.
|"Experiential Robotics at Northeastern University"
Dr. Elizabeth Phillips, Human Factors and Applied Cognition Group, George Mason University
|"Towards Robust Human-Robot Interaction: A Quality Diversity Approach"
Dr. Stefanos Nikolaidis, Interactive and Collaborative Autonomous Robotics (ICAROS) Lab, University of Southern California
|"Defining HRI Metrics and Evaluation Methods for Robotics in Manufacturing"
Adam Norton, Associate Director of the New England Robotics Validation and Experimentation (NERVE) Center, University of Massachusetts - Lowell
| Competition as a design method to develop and evaluate ethical robots
Jimin Rhim & AJung Moon
| On the Importance of Environments in Human-Robot Coordination
Matthew C. Fontaine*, Ya-Chuan Hsu*, Yulun Zhang*, Bryon Tjanaka, Stefanos Nikolaidis
| Multimodal Bio-Behavioral Approaches to Study Trust in Human-Robot Collaboration
Aakash Yadav, Sarah K. Hopko, Yinsu Zhang, Ranjana K. Mehta
| Towards Formalizing HRI Data Collection Processes
Zhao Han and Tom Williams
| Characterizing Task Relevant Human Behavior Using a Model Free Metric
Michael Lewis, Katia Sycara, Dana Hughes, Huao Li, and Tianwei Ni
| Measuring Intention to Use in HRI - A Parsimonious Model
Ruben Huertas-Garcia, Santiago Forgas-Coll, Antonio Andriella, Guillem Alenyà
Despite major advances in robot interfaces and user-centric robot designs, practical implementations of HRI technologies continue to elude industry. A critical barrier to practical human-robot teaming is the lack of consistent test methods and metrics for assessing HRI research. Repeatable and robust evaluations are therefore vital to closing the gap between HRI research and implementation.
This full-day, virtual workshop at the 2022 ACM/IEEE HRI Conference will engage the HRI community across domains including manufacturing, retail, and health to formulate solutions for the use of effective test methods and metrics in evaluating HRI research. The workshop is driven by the need to push the boundaries of HRI research by establishing benchmarks and standards, with a focus on test methods and metrics in interdisciplinary collaborations and multi-domain applications. Specific goals include:
Presentations by contributing authors will focus on the documentation of the test methods, metrics, and data sets used in their respective studies. Keynote and invited speakers will be selected from a targeted list of HRI researchers across a broad spectrum of application domains. Poster session participants will be selected from contributors reporting late-breaking evaluations and their preliminary results.
Discussions are intended to highlight the various approaches, requirements, and opportunities of the research community toward assessing HRI performance, enabling advances in HRI research, and establishing trust in HRI technologies. Specific topics of discussion will include:
Finally, this workshop is the fourth in a series leading toward formalized HRI performance standards. Previous workshops focused on community and consensus building, and the IEEE Robotics and Automation Society has since launched two new standards development efforts in support: the first (IEEE P3107) is developing consistent terminology for HRI technologies, and the second (IEEE P3108) is establishing best practices for human-subject studies. Discussions of these efforts will be included in the workshop, and related standards meetings will be held in conjunction with it.
Extended abstracts (1-2 pages, excluding references) will be accepted for the workshop. In-progress or proposed work may also be submitted. Please submit your abstract in IEEE Conference format.
The categories for submissions are:
If your submission does not fit into one of these categories, please choose the closest one or contact us. These categories are not restrictive; we welcome all types of submissions related to Test Methods & Metrics.