
4th Annual Workshop on Novel and Emerging Test Methods & Metrics for Effective HRI
@ HRI 2022
March 11, 2022, 10:00-18:00 EST (UTC-5) on Google Meet

Access information:
To join the video meeting, click this link: https://meet.google.com/sxs-dmkb-cdp
Otherwise, to join by phone, dial +1 224-662-0495 and enter this PIN: 623 488 097#
To view more phone numbers, click this link: https://tel.meet/sxs-dmkb-cdp?hs=5

Please contact shelly.bagchi@nist.gov with any questions about the workshop.

Schedule: Please see the detailed schedule here.

Workshop intro slides

Confirmed Speakers:

"Experiential Robotics at Northeastern University"
Dr. Julie Marble, Executive Director for the Institute for Experiential Robotics, Northeastern University

Dr. Elizabeth Phillips, Human Factors and Applied Cognition Group, George Mason University

"Towards Robust Human-Robot Interaction: A Quality Diversity Approach"
Dr. Stefanos Nikolaidis, Interactive and Collaborative Autonomous Robotics (ICAROS) Lab, University of Southern California

"Defining HRI Metrics and Evaluation Methods for Robotics in Manufacturing"
Adam Norton, Associate Director of the New England Robotics Validation and Experimentation (NERVE) Center, University of Massachusetts Lowell

Contributed Papers:

Competition as a design method to develop and evaluate ethical robots
Jimin Rhim & AJung Moon

On the Importance of Environments in Human-Robot Coordination
Matthew C. Fontaine*, Ya-Chuan Hsu*, Yulun Zhang*, Bryon Tjanaka, Stefanos Nikolaidis

Multimodal Bio-Behavioral Approaches to Study Trust in Human-Robot Collaboration
Aakash Yadav, Sarah K. Hopko, Yinsu Zhang, Ranjana K. Mehta

Towards Formalizing HRI Data Collection Processes
Zhao Han and Tom Williams

Characterizing Task Relevant Human Behavior Using a Model Free Metric
Michael Lewis, Katia Sycara, Dana Hughes, Huao Li, and Tianwei Ni

Measuring Intention to Use in HRI - A Parsimonious Model
Ruben Huertas-Garcia, Santiago Forgas-Coll, Antonio Andriella, Guillem Alenyà

About the workshop

Despite major advances in robot interfaces and user-centered robot designs, practical implementations of HRI technologies continue to elude industry. A critical barrier to practical human-robot teaming is the lack of consistent test methods and metrics for assessing HRI research. Repeatable and robust evaluations for HRI are therefore vital to closing the gap between HRI research and implementation.

This full-day, virtual workshop at the 2022 ACM/IEEE HRI Conference will engage the HRI community across domains including manufacturing, retail, and healthcare to formulate solutions for effective test methods and metrics for evaluating HRI research. The workshop is driven by the need to push the boundaries of HRI research by establishing benchmarks and standards, with a focus on test methods and metrics for interdisciplinary collaborations and multi-domain applications. Specific goals include:

  • to develop and encourage the use of consistent metrology for HRI, to produce quality datasets of pragmatic applications, and to validate human-subject studies for HRI;
  • to explore novel and emerging metrology tools that have broad applicability across HRI domains, including validated surveys;
  • to support a discussion about best practices in metrology and what features should be measured as the underlying theory of HRI advances; and
  • to encourage the creation and sharing of high-quality, consistently formatted datasets for HRI research.

Discussion Topics

Presentations by contributing authors will focus on the documentation of the test methods, metrics, and data sets used in their respective studies. Keynote and invited speakers will be selected from a targeted list of HRI researchers across a broad spectrum of application domains. Poster session participants will be selected from contributors reporting late-breaking evaluations and their preliminary results.

Discussions are intended to highlight the various approaches, requirements, and opportunities of the research community toward assessing HRI performance, enabling advances in HRI research, and establishing trust in HRI technologies. Specific topics of discussion will include:

  • reproducible and repeatable studies with quantifiable test methods and metrics;
  • best practices and pitfalls in designing and executing human-subject studies;
  • human-robot collaboration and teaming test methods;
  • human dataset transferability and traceability;
  • HRI metrics (e.g., situation and cultural awareness);
  • industry-specific metrology requirements.

Finally, this workshop is the fourth in a series leading toward formalized HRI performance standards. Previous workshops focused on community and consensus building, and the IEEE Robotics and Automation Society has since launched two new standards development efforts as a result. The first (IEEE P3107) is developing consistent terminology for HRI technologies, and the second (IEEE P3108) is establishing best practices for human-subject studies. Both efforts will be discussed during the workshop, and related standards meetings will be held in conjunction with it.


Important Dates

  • 23 February 2022: Submission deadline for extended abstracts
  • 2 March 2022: Notification of acceptance for workshop presentations
  • 11 March 2022: Full-day workshop to take place virtually

Extended abstracts (1-2 pages, excluding references) will be accepted for the workshop. In-progress or proposed work may also be submitted. Please format your abstract using the IEEE conference template.

The categories for submissions are:

  • Validation or Expansion of Established Metrics & Test Methods for HRI (e.g. surveys and scales);
  • Development of Novel Metrics & Test Methods for HRI;
  • Development of Datasets for HRI;
  • Replication Studies.

If your submission does not fit into one of these categories, please choose the closest one or contact us. The categories are not restrictive, and we welcome all types of submissions related to test methods and metrics.

Click here or visit https://forms.gle/W6F8sKpHDPCsgJ7eA to submit to the workshop.