
HRI 2020 Workshop on Test Methods and Metrics for Effective HRI in Real World Human-Robot Teams

Note: Due to the cancellation of HRI 2020, the workshop will not be taking place in-person. However, we plan to host an abbreviated virtual workshop on March 23, 2020, with a tentative start time of 10 am US Eastern Time (UTC-4; check your time zone here). Please register here if you are interested in attending and keep an eye on your email for an updated schedule and connection instructions. If you have any urgent questions or concerns, please contact us. Thanks!

>> Register now for the virtual workshop!

Abbreviated schedule (all times EDT, GMT-4)
10:00 - Opening Remarks - Jeremy Marvel (Presentation)
10:10 - Contributing Author - Miruna-Adriana Clinciu (abstract)
10:30 - Contributing Author - Frank Förster (abstract)
10:50 - Contributing Author - Kourosh Darvish (abstract)
11:10 - Contributing Author - Rob Semmens (abstract)
11:30 - Invited Speaker - Sophie Wang, Clemson University
12:00 - Break
12:10 - Contributing Author - Andrey Rudenko (abstract, presentation, THOR data set)
12:30 - Contributing Author - Yigit Topoglu (abstract)
12:50 - Contributing Author - David St-Onge (abstract)
13:10 - Contributing Author - Chittaranjan Swaminathan (abstract, presentation, video)
13:30 - Closing Remarks
13:40 - Overflow discussion (additional presentations, etc.)

Coming soon: Links to the contributing authors' extended abstracts

Despite large advances in robot interfaces and user-centric robot designs, the need for effective HRI continues to present challenges for the field of robotics. A key barrier to achieving effective human-robot teaming in a multitude of domains is that there are few consistent test methods and metrics for assessing HRI effectiveness. The necessity for validated metrology is driven by the desire for repeatable and consistent evaluations of HRI methodologies.

This full-day workshop at the 2020 ACM/IEEE HRI Conference in Cambridge, UK, will address the issues surrounding the development of test methods and metrics for evaluating HRI performance across the multitude of application domains, including industrial, social, medical, field, and service robotics. This workshop is driven by the need to establish consistent standards for evaluating HRI in real-world applications, and to understand how the interfaces, technologies, and underlying theories impact the effective collaboration of human-robot teams. Specific goals include the following:

  • to develop and encourage the use of consistent test methods and metrics in evaluating HRI technologies, producing quality data sets of pragmatic applications, and validating human subject studies for HRI;
  • to establish benchmarks and baselines along a spectrum of key performance indicators for assessing and comparing novel HRI systems and applications;
  • to support a discussion about best practices in metrology and what features should be measured as the underlying theory of HRI advances;
  • to encourage the creation and sharing of high-quality, consistently-formatted datasets for HRI research; and
  • to promote the development of reproducible, metrics-oriented studies that seek to understand and model the human element of HRI teams.

Important Dates

  • 14 February 2020: Deadline for all submissions (Full Paper, Extended Abstract, Replication Study Proposal)
  • 28 February 2020: Notification of acceptance for workshop presentation
  • 23 March 2020: Full-day Workshop to take place at Fitzwilliam College, Cambridge, UK
  • Find deadlines for the THRI Special Issue here.

Discussion Topics

Presentations by contributing authors will focus on the documentation of the test methods, metrics, and data sets used in their respective studies. Keynote and invited speakers will be selected from a targeted list of HRI researchers across a broad spectrum of application domains. Poster session participants will be selected from contributors reporting late-breaking evaluations and their preliminary results.

Discussions are intended to highlight the various approaches, requirements, and opportunities of the research community toward assessing HRI performance, enabling advances in HRI research, and establishing trust in HRI technologies. Specific topics of discussion will include:

  • reproducible and repeatable studies with quantifiable test methods and metrics;
  • systems papers discussing applications and task-specific metrics;
  • human-robot collaboration and teaming test methods;
  • human data set content, transferability, and traceability;
  • HRI metrics (e.g., situation and cultural awareness);
  • human-machine interface metrics; and
  • industry-specific metrology requirements.

Finally, this workshop is the second in a series of workshops leading toward formalized HRI performance standards. The IEEE Robotics and Automation Society (RAS) will host and support this standardization effort. Early workshops are intended to target community and consensus building, and to establish a culture of repeatable, reproducible, metrology-based research in HRI. A third workshop is planned for the 2021 ACM/IEEE International Conference on Human-Robot Interaction, and will specifically address the action items identified in this year's workshop. A workshop report documenting the presentations, discussions, and ensuing take-aways and action items will be produced and made publicly available.

Schedule

The workshop schedule is shown below. The structure follows that of last year's workshop: the first half of the day focuses on the technical aspects of metrology for effective, real-world HRI, and features a keynote speaker and technical presentations of peer-reviewed, contributed papers. The second half of the day will focus on international efforts that explore repeatability, reproducibility, traceability, and the effects of demographics, culture, and study design on the results of HRI research.

Morning Session
Welcome & Introduction
Keynote - Dr. Laurel Riek (UC San Diego)
Accepted Paper Presentations
Invited Talk 1
Poster Session - Metrics & Test Methods
Invited Talk 2
Breakout Session
Lunch
Afternoon Session
Introduction & Recap
Invited Talk 3
Concept Poster Session - Repeatability Studies
Invited Talk 4
Breakout Session
Next Steps Discussion
Confirmed Speakers (Schedule TBD):
Dr. Laurel Riek (UC San Diego) - Keynote
Dr. Kerstin Fischer (University of Southern Denmark) - Tentative topic: "An Interactionalist Perspective on the Development of KPIs for Human-Robot Teaming"
Dr. Sarah Fletcher (Cranfield University)
Dr. Megan Strait (UT-Rio Grande Valley) - Tentative topic: "Reproducibility of the uncanny valley"

Accepted Submissions

Submissions of three types will be accepted for the workshop:

  • Full papers (6-8 pages), detailing completed work in the area of test methods & metrics for real-world HRI
  • Extended abstracts (up to 1 page), detailing in-progress work related to test methods & metrics for real-world HRI
  • Extended abstracts (up to 1 page), comprising short proposals for an HRI replication study (with the potential for funding); see preliminary details below.

Full paper submissions by contributing authors will automatically be peer-reviewed and submitted to a Special Issue of the ACM Transactions on Human-Robot Interaction (THRI) journal, scheduled for publication in March of 2021. Additionally, selected authors will give a short presentation at the workshop. Extended abstracts of both types will be eligible for informal poster presentations; further details to follow upon selection.

The HRI replication study proposals are intended to facilitate broader user studies and further validate existing research by testing the replicability & repeatability of previous research. Please provide an overview of the research you would like to replicate, the motivation for doing so, and how you plan to extend the scope of the work through your replication study. Submissions to the workshop will be considered preliminary proposals, intended to obtain feedback from the community and generate discussion on the needs for replicability studies. After the workshop, a number of proposals are expected to be selected to serve as the basis for funded repeatability studies.

Click here or visit https://forms.gle/CF9etMExSAaSZYCq8 to submit to the workshop.

Organizers