
The Cybernetic Revolution: The Influence of Metrology on the User Experience in Human-Robot Interaction

Jeremy Marvel, Research Scientist and Project Leader, Manipulation and Mobility Systems Group, National Institute of Standards and Technology

We are living in a world in which we are surrounded by technology tailored to our needs. Our clothes are treated with nanoparticles to resist wrinkles and stains. We have sent probes beyond the farthest reaches of our solar system, and we have selfie-taking machines exploring celestial neighbors and conducting revolutionary experiments that alter our understanding of the universe. Artificial intelligence is omnipresent and impacts the way we drive, entertain ourselves, read the news and make dinner. We can even carry out complex social relationships through online video games without ever having to physically meet another human being. The world’s knowledge can be accessed in mere seconds on computers we carry in our pockets. In our pockets! Clearly, we are living in the future so frequently and fancifully predicted in popular culture.

But … where is the plastic pal who’s fun to be with that I was told would be waiting for me?

Robots are becoming increasingly prevalent in the manufacturing, medical and service fields. They are purposefully designed to work around and with people, and are even marketed as being “collaborative,” supposedly safer and easier to use than ever. In every case, robots are custom-tailored to their users’ needs. Such trends suggest robots are becoming consumer products.


In the home, however, robots are largely limited to hobbyist projects, STEM toys and single-purpose cleaning appliances. Revolutionary and sociable robots are being introduced to an eager market, only to fall short of the capabilities of the simpler, task-built devices that merely sit on a shelf. So why the discrepancy?

In reality, there is no discrepancy. It’s the task and the utility of a given robot that allows it to be custom-designed for the end-user. Specific tasks get specific robots that are built to be user-friendly. General tasks get … something else. When the task is unknown or ill-defined, the manufacturer must anticipate all possible — or at least all supported — applications and design around that.

All robots are purpose-built, principally because there is a trade-off between simplicity and functionality. To be usable and useful, the interfaces connecting people and machines must carefully traverse the path that is flanked by “too complex” and “too simple.” The real challenge lies in the realization that experts in the field don’t accurately know where that path is, how wide it is or to where it leads. The purpose of the interface is to facilitate communication and drive interaction. It relays important information to the person working with the machine, and it provides a mechanism for expressing the user’s desired actions. The challenge, however, is in balancing usability for a broad spectrum of users while simultaneously providing useful products. To find that Goldilocks “just right” mix of comfort and functionality often requires a lot of trial and error, especially if the ultimate application of the machine is unknown.

And that’s if all is working as it should be.

When things start to go wrong, it can be extremely difficult to diagnose the problem or predict how bad things will get. More intelligence is needed to assess the situation and provide a good prognosis. Assuming that such a prognosis is found and that it’s accurate, how is the robot supposed to share this information such that an untimely fate is avoided? That’s the interface’s job.

A good interface can enhance a user’s experience, while a bad interface can render a machine completely unusable. Thus, the interface drives the experience. Similarly, the means by which we interact with the machines dictates their utility. By changing the interface, one can effectively change how a given robot is used … or if it’s used.


As such, an interface that can efficiently adapt to a user or a task is theoretically more useful for that task than an interface that attempts to accommodate all possible tasks or behaviors. To be able to accomplish this, however, the robot needs to be aware of its environment and the user.

While we have some basic tenets to help us differentiate good graphical interfaces from bad ones, there are no metrics by which vendors can measure the effectiveness and efficiency of the interaction between people and robots before the robots are sold and used. Nor are there standardized means of measuring how much better one interface or interaction is than another. Currently, the best measures of the effectiveness of human-machine interactions are subjective, qualitative, user-volunteered reports; few objective, quantitative measures exist by which a given interaction can be assessed.

If such quantitative metrics existed, however, a robot could adjust its behaviors to match the user and the application, working as a collaborative tool to enable the efficient completion of a task. Similarly, if the people working with the robot perceive such adjustments to be both intentional and appropriate, their confidence in the robot’s performance is strengthened and they can, in turn, respond accordingly. This mutual situational awareness is critical for effective teaming, regardless of whether it’s on the factory floor or in your kitchen at home. If the interaction breaks down, so too does the team’s performance.
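To make the idea of objective, quantitative interaction measures a little more concrete, here is a minimal, purely illustrative sketch of how simple team-fluency ratios — such as the fraction of time the human is idle, or the fraction of time both teammates are active at once — could be computed from a timestamped activity log. These particular ratios appear in HRI research but are not NIST-endorsed standards, and the function and data shapes below are assumptions made for illustration only.

```python
# Illustrative sketch: simple fluency ratios from timestamped activity logs.
# The metric names (idle ratio, concurrent activity) are common in HRI
# research; this is not part of any standardized NIST methodology.

def fluency_metrics(human_busy, robot_busy, total_time):
    """Compute fluency ratios from lists of (start, end) activity
    intervals, in seconds, for the human and the robot over a shared task."""

    def busy_seconds(intervals):
        # Total active time, assuming non-overlapping intervals per agent.
        return sum(end - start for start, end in intervals)

    def overlap_seconds(a, b):
        # Total time during which both agents are active simultaneously.
        total = 0.0
        for s1, e1 in a:
            for s2, e2 in b:
                total += max(0.0, min(e1, e2) - max(s1, s2))
        return total

    return {
        "human_idle_ratio": 1.0 - busy_seconds(human_busy) / total_time,
        "robot_idle_ratio": 1.0 - busy_seconds(robot_busy) / total_time,
        "concurrent_activity": overlap_seconds(human_busy, robot_busy) / total_time,
    }

# Hypothetical 10-second handoff task: the human works 0-4 s, the robot 3-9 s.
metrics = fluency_metrics([(0.0, 4.0)], [(3.0, 9.0)], total_time=10.0)
print(metrics)  # human idle 0.6, robot idle 0.4, concurrent activity 0.1
```

Numbers like these could, in principle, let a robot detect that its teammate is spending too much time waiting and adapt its pacing accordingly — exactly the kind of adjustment described above.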

This is the basis for a new research project at NIST, the Performance of Human-Robot Interaction, which seeks to establish test methods and metrics for assessing and assuring the effective teaming of humans and machines. These metrics and test methods will enable the benchmarking and advancement of the technology and establish a baseline for maintaining trust in the capabilities of the robot. Part of the project’s effort involves reaching out to the world’s experts in human-robot interaction to develop a standardized measurement methodology.

In the recent workshop, Test Methods and Metrics for Effective HRI in Collaborative Human-Robot Teams, NIST researchers and world experts established both the need and means by which human-robot interaction can be objectively measured and replicated. These needs take into account both applications and intercultural issues that drive the user experience and mechanisms for interaction. Ultimately, this workshop kick-started a concerted effort to advance collaborative robot technologies into the future.

So, perhaps someday soon, we’ll get those robots.

This post originally appeared on Taking Measure, the official blog of the National Institute of Standards and Technology (NIST) on May 1, 2019.


About the Author


Jeremy Marvel is a research scientist and project leader in NIST’s Intelligent Systems Division. Jeremy has over 15 years of research experience in robotics and artificial intelligence, working in academia, industry, and government. Jeremy’s fields of expertise include human-robot and robot-robot collaboration, machine learning for adaptive robot control, and robot safety. When he’s not at work playing with robots, Jeremy actively participates in STEM outreach and enjoys going on adventures with his daughter.

